I can't seem to find the Az3_bringup package that your tutorial refers to in the git repo. Do you have it by chance? If not, do you have a similar project that doesn't use the laser scan and instead uses a point cloud stream directly from a camera?
The azimut3 packages are in a private repo, but essentially the bringup package only starts the robot's drivers (base controllers, cameras, lidars, wheel odometry...) so that the robot is ready to be controlled (waiting for the cmd_vel topic).
I understand now, from this forum post, that visual odometry and wheel odometry are not being fused within rtabmap, correct?
We weren't planning to use this just for visual odometry with a handheld system - we are trying to create a mobile robot that uses visual odometry from a Kinect-style camera as well as wheel odometry from the robot's motor controller. For now we are using simulation in Gazebo to prove out the navigation portion.
Do you have (or are you aware of) any complete examples in Gazebo that showcase the robot_localization package fusing visual odometry with wheel odometry before sending the result to rtabmap, which then sends the occupancy grid map to move_base?
This is more of a robot_localization question, then. For our indoor robots, we rely more on wheel odometry alone (possibly fused with an IMU using robot_localization), with lidar refinements done on the rtabmap side. I don't have an example of robot_localization fusing wheel and visual odometries; there may be one somewhere in a post on this forum, but I don't remember. You may check this one: https://github.com/introlab/rtabmap_ros/blob/master/launch/tests/sensor_fusion.launch, swapping the IMU input for the wheel odometry and adjusting the parameters of the EKF. Also, publish_null_when_lost could be set to false (not sure why it was true in this example back in 2016).
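To make the suggested adaptation more concrete, here is a minimal sketch of what the EKF part of such a launch file could look like, with the IMU input replaced by wheel odometry. This is an assumption-laden example, not a tested configuration: the topic names (/odom for wheel odometry, /vo for visual odometry), the fused-state selections, and the frame names are all placeholders you would adapt to your robot, and the covariance tuning is left at defaults.

```xml
<launch>
  <!-- Sketch only: topic names, frames, and fused fields below are assumptions. -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
    <param name="frequency" value="30"/>
    <param name="world_frame" value="odom"/>
    <param name="odom_frame" value="odom"/>
    <param name="base_link_frame" value="base_link"/>

    <!-- odom0: wheel odometry (hypothetical topic /odom).
         The 15-element config selects, in order:
         x y z, roll pitch yaw, vx vy vz, vroll vpitch vyaw, ax ay az.
         Here only vx, vy and yaw rate are fused. -->
    <param name="odom0" value="/odom"/>
    <rosparam param="odom0_config">[false, false, false,
                                    false, false, false,
                                    true,  true,  false,
                                    false, false, true,
                                    false, false, false]</rosparam>

    <!-- odom1: visual odometry, e.g. from rtabmap's rgbd_odometry
         (hypothetical topic /vo), fusing the same velocity fields. -->
    <param name="odom1" value="/vo"/>
    <rosparam param="odom1_config">[false, false, false,
                                    false, false, false,
                                    true,  true,  false,
                                    false, false, true,
                                    false, false, false]</rosparam>
  </node>
</launch>
```

rtabmap would then subscribe to the EKF's fused output (odometry/filtered by default) instead of either raw odometry. If publish_null_when_lost is set to false on the visual odometry node, the EKF can keep predicting from wheel odometry while the camera is lost rather than receiving null poses.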