How does rtabmap handle icp+vis slam? Does it combine both methods for more robust loop closure given that for instance camera and lidar is available?
Administrator
I think it is easier to understand if I compare all approaches.
A) Reg/Strategy=0 (visual-only): on proximity detection or global loop closure detection, we re-compute visual correspondences between the two images and compute the transform.

B) Reg/Strategy=1 (ICP and partial visual): for proximity detections, visual estimation is not done; we do ICP directly between the laser scans using the guess. For global loop closure detections, we use BOW correspondences between the two images (without re-computing correspondences), compute a visual transform, then feed that transform as the guess to ICP.

C) Reg/Strategy=2 (ICP and full visual): for either proximity detections or global loop closure detections, visual correspondences are re-computed, a visual transform is estimated, then ICP is done afterwards. It is mainly doing like Reg/Strategy=0, but with an ICP step added at the end.

Re-computing visual correspondences is more expensive than using the already-computed BOW ones, but it gives more correspondences, thus a better transform estimation. For global loop closures, we can see that Reg/Strategy=1 skips that step (and uses the BOW correspondences directly) because we don't need a perfect guess for ICP, just one close enough, and we let ICP converge to the best one. With Reg/Strategy=2, the difference is that we would provide a better guess to ICP. Note also that sometimes we may not get enough BOW correspondences to compute a guess with Reg/Strategy=1, unlike with Reg/Strategy=2.

For proximity detection (one-to-one proximity), Reg/Strategy=2 forces a valid visual transform to be computed before doing ICP, unlike Reg/Strategy=1 (for which the visual step is not done in that case). So if a visual transform cannot be computed (e.g., the robot has a 180 deg orientation difference between the two nodes), then Reg/Strategy=2 would not detect that proximity loop closure. Note however that if RGBD/ProximityPathMaxNeighbors>0 and the node has a laser scan, we still do proximity by laser scan merging (one-to-many proximity), independently of what Reg/Strategy is set to.
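To summarize, the three strategies can be sketched as a small decision function (the function, parameter, and step names below are purely illustrative, not rtabmap's actual API):

```python
def estimate_transform(strategy, detection, enough_bow_correspondences=True,
                       visual_ok=True):
    """Illustrative summary of which registration steps run per Reg/Strategy.

    strategy: 0 (visual-only), 1 (ICP + partial visual), 2 (ICP + full visual)
    detection: "proximity" or "global"
    Returns the ordered list of steps performed, or None if the link is rejected.
    """
    if strategy == 0:
        # Visual-only: re-compute correspondences for both detection types.
        return ["recompute_visual", "visual_transform"] if visual_ok else None
    if strategy == 1:
        if detection == "proximity":
            # No visual step: ICP directly between scans, from the guess.
            return ["icp_from_guess"]
        # Global: reuse BOW correspondences as a (possibly rough) ICP guess.
        if not enough_bow_correspondences:
            return None
        return ["bow_visual_guess", "icp"]
    if strategy == 2:
        # Both detection types: full visual estimate first, then ICP refinement.
        if not visual_ok:
            return None  # e.g., 180 deg orientation difference between nodes
        return ["recompute_visual", "visual_transform", "icp"]
```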
cheers, Mathieu
Thank you Mathieu, this explanation cleared multiple questions I had!
In reply to this post by matlabbe
Hi Mathieu,
Thanks for your answer. I'm starting to dig into rtabmap in order to understand how it works. Regarding proximity detection, does it only use the laser scan? I mean, if I feed the rtabmap node with a complete point cloud, does it use proximity detection to correct the drift in a corridor? If it does not, is it possible to feed the rtabmap node with both a point cloud and a laser scan? And if that is not possible either, and I only feed rtabmap with a laser scan, how does this affect loop closure detection when doing the refinement with ICP? I feel like doing ICP with a laser scan could be less robust than using a point cloud. Thanks in advance.
Administrator
You can enable subscribe_scan_cloud and remap its scan_cloud topic to any PointCloud2 topic. So yes, it doesn't need to be lidar data, just any point cloud.
To feed a laser scan and a point cloud at the same time, the only way is to merge them externally into a single point cloud combining both the lidar scan and the other point cloud. You may have to deal with deskewing externally (maybe with a lidar_deskewing node) on both sensors before assembling them (which could be done with a point_cloud_aggregator node). It depends on the environment and your lidar (its range and coverage). For example, in an indoor environment where the 2D lidar can hit the walls all around the robot (360 coverage), it can work well and would be pretty fast (less CPU). If you are using the point cloud from a TOF camera with a limited FOV, doing ICP on that cloud may be worse, in particular if the camera is looking at a flat wall. But I would agree that between a 2D 360 lidar and a 3D 360 lidar, the 3D lidar will be more robust in most cases.
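The external merge step amounts to transforming both clouds into a common frame and concatenating the points. A minimal pure-Python sketch (frame names and the helper are illustrative; in practice this would operate on PointCloud2 messages, e.g., via a point_cloud_aggregator node):

```python
def merge_clouds(scan_points, cloud_points, scan_to_base, cloud_to_base):
    """Merge a (deskewed) lidar scan and another point cloud into one cloud
    expressed in the robot base frame.

    Points are (x, y, z) tuples; *_to_base are 4x4 homogeneous transforms
    given as nested lists (row-major).
    """
    def transform(points, T):
        out = []
        for x, y, z in points:
            # Apply the rotation + translation part of the homogeneous transform.
            out.append((
                T[0][0] * x + T[0][1] * y + T[0][2] * z + T[0][3],
                T[1][0] * x + T[1][1] * y + T[1][2] * z + T[1][3],
                T[2][0] * x + T[2][1] * y + T[2][2] * z + T[2][3],
            ))
        return out

    # Concatenate both clouds once they share the same frame.
    return transform(scan_points, scan_to_base) + transform(cloud_points, cloud_to_base)
```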
Hi,
Thanks for your reply. Regarding this issue, I have another question about how some Icp/* parameters work. Let's suppose this case:

"Icp/Strategy": "1" # libpointmatcher

I do not really understand the difference between Icp/OutlierRatio and Icp/CorrespondenceRatio. Moreover, how does Icp/MaxCorrespondenceDistance really affect the search for correspondences in the point cloud?

Edit: I have also been investigating some RGBD/* parameters regarding proximity detection. I would like to know their relation to the Icp/* parameters. As far as I understand, these parameters:

"RGBD/ProximityBySpace": "true"
"RGBD/LocalRadius": "2.0"
"RGBD/ProximityPathFilteringRadius": "1.0"
"RGBD/ProximityMaxPaths": "0"
"RGBD/ProximityPathMaxNeighbors": "5"
"RGBD/ProximityMaxGraphDepth": "0"
"RGBD/ProximityAngle": "45"
"RGBD/NeighborLinkRefining": "true"

in some way define which nodes of the graph are going to be taken into account for proximity detection. After the nodes are selected, this is where libpointmatcher takes part to find point cloud correspondences (the Icp/* parameters). Am I right? Could I have a more in-depth description of these parameters in order to know how to tune them?
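To check my understanding of that first stage, I imagine the node selection as something roughly like this (purely illustrative, not rtabmap's code; the real selection also involves paths, graph depth, and the other parameters above):

```python
import math

def select_proximity_candidates(current_pose, nodes, local_radius=2.0,
                                proximity_angle_deg=45.0):
    """Illustrative first-stage filter: keep graph nodes that are close in
    space (RGBD/LocalRadius) and similar in heading (RGBD/ProximityAngle)
    before any ICP registration is attempted. Poses are (x, y, yaw) tuples.
    """
    cx, cy, cyaw = current_pose
    candidates = []
    for node_id, (x, y, yaw) in nodes.items():
        # Reject nodes outside the local radius around the current pose.
        if math.hypot(x - cx, y - cy) > local_radius:
            continue
        # Reject nodes whose heading differs too much (wrapped to [-pi, pi]).
        dyaw = abs(math.atan2(math.sin(yaw - cyaw), math.cos(yaw - cyaw)))
        if math.degrees(dyaw) > proximity_angle_deg:
            continue
        candidates.append(node_id)
    return candidates
```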
