Posted by derektan1995
URL: http://official-rtab-map-forum.206.s1.nabble.com/Tips-needed-to-improve-on-RTAB-Mapping-for-legged-Robots-tp7417p7476.html
Hi Mathieu,
Thanks for your reply. A few follow-up questions:
1)
matlabbe wrote
You could try setting "odom_sensor_sync" to true.
I don't see any such parameter on the rtabmap node when running rtabmap --param. I assume you are referring to the 'rtabmap_ros/data_odom_sync' node? From the launch file I posted above, I am already using the 'rtabmap_ros/rgbd_sync' node.
It seems that the rgbd_sync node outputs a single combined 'rgbd_image' topic, yet the 'data_odom_sync' node takes the 'rgb/image_in' and 'depth/image_in' topics individually. How should I reconcile the output of one node with the inputs of the other?
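To make my confusion concrete, here is roughly the wiring I was expecting to write (a sketch only; the camera topics are from my own setup, and I guessed the camera_info/odom input names by analogy with the two image inputs, so they may be wrong):

    <node pkg="nodelet" type="nodelet" name="data_odom_sync"
          args="standalone rtabmap_ros/data_odom_sync">
      <!-- The two inputs I found on the node: -->
      <remap from="rgb/image_in"       to="/camera/color/image_raw"/>
      <remap from="depth/image_in"     to="/camera/aligned_depth_to_color/image_raw"/>
      <!-- Guessed by analogy; please correct me if these names are wrong: -->
      <remap from="rgb/camera_info_in" to="/camera/color/camera_info"/>
      <remap from="odom_in"            to="/odom"/>
    </node>

There is no input here that would accept the combined 'rgbd_image' topic from rgbd_sync, which is the mismatch I am asking about.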
2)
Ok, sounds good! I have set my 'MaxCorrespondenceDistance' parameter to 1.5 instead of 0.5, since that yields a better overall map. Nodes now get proximity links (shown in yellow) to other nodes that are farther away, allowing the graph to be corrected over longer distances. Is my analysis accurate?
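For context, this is how I am setting it in my launch file (assuming the full parameter name is Icp/MaxCorrespondenceDistance; the proximity parameter below is just my guess at what enables those yellow links):

    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
      <!-- Widened from 0.5 so ICP accepts correspondences up to 1.5 m away -->
      <param name="Icp/MaxCorrespondenceDistance" value="1.5"/>
      <!-- My guess at the switch that enables proximity (yellow) links -->
      <param name="RGBD/ProximityBySpace" value="true"/>
    </node>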
3a)
matlabbe wrote
The depth map is used to get the 3D positions of the visual features extracted from the RGB image. This is used mainly for visual loop closure detection. In rtabmapviz/Preferences/3D rendering, you can hide the 3D map and show only the scan map.
It seems that the 3D depth map will be constructed regardless of whether I choose to hide it, since the keypoints extracted from the image use this 3D position information. Let's say that for each 1920x1080 image (~2 million pixels) I am processing, I extract at most 1000 keypoints. It appears that all depth pixels are used to construct the 3D visualization, rather than just those 1000 keypoints.
I wonder if this will result in excessive memory consumption in mapping mode, since all we need for loop closure detection are the positions of the 1000 keypoints? I intend to use this visual loop closure feature while constructing a 2D lidar map, over long periods of time and long distances.
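In case it is relevant to the memory question, these are the two knobs I was considering to shrink what gets stored (Mem/ImagePostDecimation and Mem/BinDataKept are names I found in the parameter list; I am not sure they are the right levers here):

    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
      <!-- Downsample images after feature extraction, before they are saved -->
      <param name="Mem/ImagePostDecimation" value="2"/>
      <!-- Or drop the raw RGB-D data from the database entirely -->
      <param name="Mem/BinDataKept" value="false"/>
    </node>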
3b)
matlabbe wrote
You could indeed feed a 3D scan with subscribe_scan_cloud; this may help to do 6DoF mapping instead of 2D mapping, as the lidar is not always horizontal to the ground when the robot is walking.
Will using scan_cloud (a 3D point cloud) instead of scan (2D lidar scans) give me a more accurate 2D localization pose? It seems that a 3D point cloud presents more features for more accurate localization, at the expense of computation speed. In addition, I read in another of your forum posts that lidar scan matching (libpointmatcher) uses scan-to-scan odometry correction. Is it possible to opt for scan-to-map odometry correction instead?
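For reference, here is how I would plan to wire up the 3D scan, along with the parameters I am guessing control scan-to-map odometry (Odom/Strategy and OdomF2M/ScanMaxSize are guesses from skimming the parameter list, and /velodyne_points is just my lidar topic):

    <node pkg="rtabmap_ros" type="icp_odometry" name="icp_odometry">
      <remap from="scan_cloud" to="/velodyne_points"/>
      <!-- My understanding: 0 = frame-to-map, 1 = frame-to-frame -->
      <param name="Odom/Strategy"       value="0"/>
      <!-- Cap the size of the local scan map used for registration -->
      <param name="OdomF2M/ScanMaxSize" value="10000"/>
    </node>
    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
      <param name="subscribe_scan_cloud" value="true"/>
      <remap from="scan_cloud" to="/velodyne_points"/>
    </node>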
It's really fascinating to see what this software is capable of. Thanks for your help!
Take Care,
Derek