I am considering integrating RTAB-Map data with motor odometry, as you explained in this setup.
I suppose the global odometry should be more robust with data also coming from encoders, but how is it combined with RTAB-Map? Is there a Kalman filter, a correlation technique, or are the sources simply averaged? What happens if the two sources become inconsistent because of a poor visual reference and/or the wheels slipping on the floor?
The odometry input of the rtabmap node assumes that the given odometry is consistent: a null value means that odometry is lost, a non-null value must always be consistent with the previous states, and an Identity value means that the odometry was reset and a new map should be created.
If you connect the odometry computed from the wheel encoders directly to rtabmap, only the wheel encoders are used for odometry (there is no fusion with visual information). If the wheels slip, wrong odometry will be sent to rtabmap and the map will contain errors (wrong constraints).
If you want to combine visual odometry and wheel odometry, you should combine them before rtabmap:
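One common way to do this fusion upstream of rtabmap (a sketch on my part, not something stated in the answer above) is the `robot_localization` package, whose EKF node can fuse wheel odometry and visual odometry into a single filtered odometry topic that is then remapped to rtabmap's odometry input. The topic names `/wheel_odom` and `/visual_odom` below are placeholders for your robot's actual topics, and the `odomN_config` matrices (x, y, z, roll, pitch, yaw, and their velocities/accelerations) should be tuned to which fields each source reports reliably:

```xml
<launch>
  <!-- EKF fusing wheel and visual odometry (robot_localization package) -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
    <param name="frequency" value="30"/>
    <param name="two_d_mode" value="true"/>

    <!-- Wheel odometry: fuse only forward and yaw velocities,
         which encoders measure directly -->
    <param name="odom0" value="/wheel_odom"/>
    <rosparam param="odom0_config">[false, false, false,
                                    false, false, false,
                                    true,  false, false,
                                    false, false, true,
                                    false, false, false]</rosparam>

    <!-- Visual odometry: fuse x, y and yaw pose, differentially,
         so slips/jumps in one source are weighted by covariance -->
    <param name="odom1" value="/visual_odom"/>
    <param name="odom1_differential" value="true"/>
    <rosparam param="odom1_config">[true,  true,  false,
                                    false, false, true,
                                    false, false, false,
                                    false, false, false,
                                    false, false, false]</rosparam>
  </node>
</launch>
```

The fused output (`/odometry/filtered` by default) can then be remapped to the rtabmap node's `odom` input. With this arrangement the Kalman filtering happens in `robot_localization`, not inside rtabmap, and each source's covariance determines how much it is trusted when the two disagree.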