Using multiple lidars with rtabmap

Using multiple lidars with rtabmap

GiladB
Hi,
I'm interested in the possibility of integrating two lidars into rtabmap to improve localization accuracy.
I have two 2D lidars on a robot, each with close to a 270-degree FOV; together they nicely cover the full 360 degrees around the robot.
I'm trying to understand the best way to integrate them into rtabmap if I choose to enable the subscribe_scan feature.

These are the 3 options that I came up with:
1) Integrate just one lidar.
2) Publish both scans on the same topic. Each scan has a different frame_id and different min/max angle values, so maybe that is enough for rtabmap to understand how to use them (probably not how it was meant to be used, but maybe it is robust enough?).
3) Merge the two scans into a virtual scan with a virtual frame_id. In this case, since the frame_id is not the actual lidar's frame_id, I would have to define somewhat inaccurate min/max range values, and I don't know whether those will affect rtabmap. Another possible issue is that each lidar runs at 33 Hz and the two are not synced (maybe if the robot moves rapidly, it will do more harm than good?).

I would love to hear what you think.

Thanks,
Gilad

Re: Using multiple lidars with rtabmap

matlabbe
Administrator
Hi,

1) Possible, though the most limited approach.
2) If the scans don't overlap, the second scan received will be dropped, though if the robot rotates, both lidars may eventually be able to localize in the local scan map. Time synchronization between the lidars may not be important in this approach, but the robot will localize with only one of the two lidars at a time.
3) Do you have wheel odometry at a higher frame rate than 33 Hz that is relatively accurate over 30 ms? That could be useful to deskew the scans and assemble them together in the same frame (e.g., base_link). That could be done with a combination of rtabmap_util/lidar_deskewing and rtabmap_util/point_cloud_aggregator (and maybe a pointcloud_to_laserscan node to feed a 2D scan instead of a 3D point cloud), then feeding the resulting cloud to rtabmap/icp_odometry.

Another option to combine the scans is ira_laser_tools, though I am not sure it accounts for the displacement between the scan timestamps (that may be negligible since the scan rate is high, as long as the robot doesn't do very fast motions).

In all cases, you may end up with a cloud or scan whose frame_id is in the middle of the robot (e.g., base_link), so the ray tracing in the map will be done from that point instead of from each lidar's point of view. For the static global map that may not be too bad; however, if you are going to use the navigation stack, I would feed the lidars independently to the local/global costmaps.
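
A rough launch sketch (ROS 2, Python) of that deskew-and-aggregate pipeline could look like the one below. The topic names (/scan_front, /scan_rear), the odom fixed frame and the parameter/remapping names are assumptions for illustration; check the rtabmap_util and rtabmap_odom node documentation for the exact names, and note that a scan-to-cloud conversion may be needed between the deskewing and aggregation steps depending on which message types those nodes accept.

from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # One deskewing node per lidar, using TF (odom -> lidar frames)
        # to correct the robot motion during each ~30 ms sweep.
        Node(package='rtabmap_util', executable='lidar_deskewing',
             name='deskew_front',
             parameters=[{'fixed_frame_id': 'odom'}],
             remappings=[('input_scan', '/scan_front')]),
        Node(package='rtabmap_util', executable='lidar_deskewing',
             name='deskew_rear',
             parameters=[{'fixed_frame_id': 'odom'}],
             remappings=[('input_scan', '/scan_rear')]),

        # Assemble the two deskewed outputs into a single cloud in base_link.
        # (A scan-to-cloud conversion step may be needed in between if the
        # aggregator only accepts PointCloud2 inputs.)
        Node(package='rtabmap_util', executable='point_cloud_aggregator',
             name='cloud_aggregator',
             parameters=[{'frame_id': 'base_link', 'count': 2}],
             remappings=[('cloud1', '/scan_front/deskewed'),
                         ('cloud2', '/scan_rear/deskewed')]),

        # ICP odometry on the combined cloud; rtabmap itself could also
        # subscribe to a 2D scan produced by pointcloud_to_laserscan instead.
        Node(package='rtabmap_odom', executable='icp_odometry',
             name='icp_odometry',
             parameters=[{'frame_id': 'base_link'}],
             remappings=[('scan_cloud', '/combined_cloud')]),
    ])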

cheers,
Mathieu