1) I would like to compare the self-localization from camera+lidar against the camera alone. Is there a good way to do this? Ideally, I'd like to be able to display each path as a line.
2) Also, when I'm using wheel+IMU odometry with rtabmap in camera-only mode, the map sometimes suddenly goes wrong: the robot in RViz jumps and the map breaks into dozens of misaligned pieces. What causes this and how can I fix it?
3) I want to use lidar to make up for the shortcomings of visual SLAM. Can I do that with rtabmap? If so, I'd like to know what can be complemented and how to do it.
I apologize for the length of these three questions, but please help me!
1) Do you want to run them at the same time (exactly the same trajectory)? You could record a rosbag of both camera and lidar, then launch rtabmap with either the camera or the lidar configuration. You can then export the poses of the two generated databases in RGB-D Dataset format and use their tool to compare the paths. Ref: https://vision.in.tum.de/data/datasets/rgbd-dataset/tools
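To give an idea of what the comparison does, here is a minimal sketch of computing absolute trajectory error (ATE) between two trajectories exported in TUM RGB-D text format. This is a simplified stand-in for the benchmark's evaluate_ate.py: it associates poses by nearest timestamp and computes translational RMSE, but skips the rigid-body (SE(3)) alignment the real tool performs, so use the official script for actual evaluation. The function names and the `max_dt` threshold are illustrative, not part of the TUM tool.

```python
import math

def parse_tum(lines):
    """Parse TUM RGB-D trajectory lines: 'timestamp tx ty tz qx qy qz qw'.

    Returns a dict mapping timestamp -> [tx, ty, tz] (translation only).
    """
    poses = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        vals = [float(v) for v in line.split()]
        poses[vals[0]] = vals[1:4]
    return poses

def ate_rmse(gt, est, max_dt=0.02):
    """Associate estimated poses to ground truth by nearest timestamp
    and compute the translational root-mean-square error."""
    gt_times = sorted(gt)
    sq_errors = []
    for t, p in est.items():
        nearest = min(gt_times, key=lambda g: abs(g - t))
        if abs(nearest - t) > max_dt:
            continue  # no ground-truth pose close enough in time
        q = gt[nearest]
        sq_errors.append(sum((a - b) ** 2 for a, b in zip(p, q)))
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

With the two databases exported as TUM-format text files, you would load each file's lines and call `ate_rmse(parse_tum(gt_lines), parse_tum(est_lines))`.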
2) You have to debug whether the jump is caused by a wrong loop closure (rtabmap) or by the wheel+IMU odometry itself. If it comes from the odometry, revise your odometry node. If it comes from rtabmap, decrease the covariance set in your odometry twist; that way rtabmap won't accept loop closures that deform the map too much.
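For reference on where that covariance lives: the nav_msgs/Odometry message stores pose and twist covariance as a flat row-major 6x6 matrix over (x, y, z, roll, pitch, yaw), so the variances sit on indices 0, 7, 14, 21, 28, 35 of a 36-element array. A small sketch (the function name and sigma values are illustrative, not a ROS API) of building such a diagonal for a planar robot:

```python
def make_planar_covariance(sigma_xy, sigma_yaw):
    """Build a flat row-major 6x6 covariance (x, y, z, roll, pitch, yaw)
    with variances only on x, y and yaw, as for a planar wheel odometry."""
    cov = [0.0] * 36
    cov[0] = sigma_xy ** 2    # x variance (index 0*6+0)
    cov[7] = sigma_xy ** 2    # y variance (index 1*6+1)
    cov[35] = sigma_yaw ** 2  # yaw variance (index 5*6+5)
    return cov
```

In your odometry node you would assign this list to `odom_msg.twist.covariance`; smaller sigmas tell rtabmap the odometry is trustworthy, so graph optimization will reject loop closures that would move the robot far from it.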
3) What kind of lidar? But yes, you can use the lidar for local localization (refine wheel odometry, or run icp_odometry) and still keep the camera for global loop closure detection.
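As a rough sketch of that setup with a 2D lidar (topic names are examples; check the parameter names against your rtabmap_ros version before relying on them):

```shell
# Lidar odometry refining the wheel odometry published on TF frame "odom"
rosrun rtabmap_ros icp_odometry scan:=/scan odom:=/icp_odom \
    _guess_frame_id:=odom

# rtabmap subscribing to both camera and scan: camera drives loop closure
# detection, scans refine the transforms (Reg/Strategy 1 = ICP registration)
roslaunch rtabmap_ros rtabmap.launch \
    rtabmap_args:="--Reg/Strategy 1" \
    odom_topic:=/icp_odom \
    subscribe_scan:=true scan_topic:=/scan
```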
1) You can set the ground_truth_frame_id and ground_truth_base_frame_id parameters (TF frames of the ground-truth system, typically /world and /tracker, assuming your ground-truth system publishes on TF). You will then see both the ground-truth and SLAM trajectories in rtabmap, even with a live RMSE error between them (a Statistics->Gt panel will appear when rtabmap is receiving ground-truth data).
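Setting those two parameters could look like this on the command line (/world and /tracker are the typical frames mentioned above; substitute your ground-truth system's frames):

```shell
rosrun rtabmap_ros rtabmap \
    _ground_truth_frame_id:=/world \
    _ground_truth_base_frame_id:=/tracker
```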