Dear Mathieu,
Many thanks, as always, for your continued development and support of RTAB-Map! Your work is truly outstanding in many ways.

I am seeking advice on how to use multiple 2D lidars for odometry and mapping. Obviously the scans need to be undistorted using a motion guess (from a constant-motion model or similar) or integrated wheel odometry, and I see that you have already implemented this for a single 2D laser scanner. However, my robot has two 2D lidars mounted at diagonal corners of its body. My current approach is:

1. Slice the laser scans, since the field of view is only 270 degrees (a quarter of each scan is blocked by the robot body).
2. Merge the scans into a single message using a ROS package such as https://wiki.ros.org/ira_laser_tools.
3. Feed the merged scan into RTAB-Map.

However, this approach prevents me from using the undistortion process because of the nature of the ROS LaserScan message: it assumes a single lidar and stores only an angular increment and range values. When the scans are merged, they are combined only spatially to fit the LaserScan schema, and the combined points are no longer in time-increasing order (a point may come from scanner 1 or scanner 2, and the two are almost certainly captured at different times, but that information is lost when the data is forced into a LaserScan message). Applying the standard undistortion procedure to this data produces incorrectly undistorted scans.

I have thought of two approaches to untangle this mess:

1. Before the laser scans are merged or fed into RTAB-Map, undistort them using integrated wheel odometry, then merge them and feed the result into RTAB-Map. The advantage of this approach is that I do not need to heavily modify RTAB-Map's odometry code.
2. Modify RTAB-Map's odometry code to take in multiple laser scans and merge/undistort them inside the odometry code (by first converting them to point-cloud messages, and so on). The disadvantage is that the resulting merged/undistorted scan then needs to be forwarded to the mapper (rtabmap_slam), which I think would complicate the code too much.

Do you have any experience, ideas, or recommendations on how to use multiple 2D laser scanners with appropriate undistortion? I am looking forward to hearing your opinions on this problem. Any kind of advice will be appreciated!

Best regards,
Jeonghwan Park
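To make the timing problem described above concrete, here is a small illustration in plain NumPy (no ROS; all numbers are hypothetical) of why per-beam capture times cannot be recovered from a spatially merged LaserScan:

```python
# Illustration (plain NumPy, no ROS) of why deskewing a merged LaserScan
# fails: the merged message keeps only ONE stamp and ONE time_increment,
# so per-beam times reconstructed from it disagree with the true times.
# All numbers below are hypothetical.
import numpy as np

def beam_times(stamp, n_beams, time_increment):
    """True capture time of each beam in a single LaserScan."""
    return stamp + np.arange(n_beams) * time_increment

# Two lidars, 4 beams each for brevity, started 5 ms apart
# (independent scanners are almost never synchronized).
t1 = beam_times(stamp=0.000, n_beams=4, time_increment=0.010)
t2 = beam_times(stamp=0.005, n_beams=4, time_increment=0.010)

# A spatial merge sorts beams by angle; suppose it interleaves the scanners:
true_times = np.stack([t1, t2], axis=1).ravel()

# What a consumer of the merged message would reconstruct from its single
# stamp and single time_increment:
reconstructed = beam_times(stamp=0.000, n_beams=8, time_increment=0.010)

# The two disagree for almost every beam, so a standard undistortion pass
# shifts those points to odometry poses sampled at the wrong times.
assert not np.allclose(true_times, reconstructed)
```

This is only a toy interleaving; real merges reorder points by angle in whatever pattern the two mounting poses produce, which makes the mismatch even less predictable.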
Hi Jeonghwan,
My answer would be very similar to the third option from this post: http://official-rtab-map-forum.206.s1.nabble.com/Using-multiple-lidars-with-rtabmap-td10172.html. I would add that only rtabmap_util/lidar_deskewing and rtabmap_util/point_cloud_aggregator may be needed. To make sure the icp_odometry and rtabmap nodes process the resulting PointCloud2 like a laser scan, you can set the scan_cloud_is_2d parameter for both nodes.

For deskewing outside the icp_odometry node, this approach requires an external fixed frame (e.g., wheel odometry) that is accurate enough over the lidar rotation time. Without an external fixed frame, icp_odometry would need to be updated to support more than one lidar in order to deskew based on a constant-motion model.

EDIT: If you don't have wheel odometry but you do have an IMU, it is possible to make a "fixed" frame from it with the rtabmap_util/imu_to_tf node. rtabmap_util/lidar_deskewing won't be able to deskew the translation, but at least the rotation will be deskewed, which may be enough.

Regards,
Mathieu
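For intuition, the deskewing step described above can be sketched in 2D roughly as follows. This is a minimal NumPy sketch under simplifying assumptions (planar motion, poses linearly interpolated between the first and last beam), not the actual rtabmap_util/lidar_deskewing code; the function names are illustrative only:

```python
# Minimal 2D deskewing sketch (NOT the rtabmap_util/lidar_deskewing code):
# each beam is re-projected with the odometry pose interpolated at its
# capture time, then expressed in the scan-start frame.
import numpy as np

def se2(x, y, yaw):
    """2D homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def deskew_scan(ranges, angle_min, angle_increment, pose_start, pose_end):
    """Deskew one scan given odometry poses (x, y, yaw) at the first and
    last beam; poses are linearly interpolated for the beams in between."""
    n = len(ranges)
    start_inv = np.linalg.inv(se2(*pose_start))
    out = np.empty((n, 2))
    for i, r in enumerate(ranges):
        a = angle_min + i * angle_increment
        p = np.array([r * np.cos(a), r * np.sin(a), 1.0])  # beam in lidar frame
        t = i / max(n - 1, 1)                              # elapsed scan fraction
        pose_i = se2(pose_start[0] + t * (pose_end[0] - pose_start[0]),
                     pose_start[1] + t * (pose_end[1] - pose_start[1]),
                     pose_start[2] + t * (pose_end[2] - pose_start[2]))
        out[i] = (start_inv @ pose_i @ p)[:2]  # express in scan-start frame
    return out

# The robot drives 0.1 m forward during the scan; a wall point 1 m ahead is
# measured at 0.9 m by the last beam, and deskewing restores it to 1.0 m.
pts = deskew_scan([1.0, 0.9], 0.0, 0.0, (0.0, 0.0, 0.0), (0.1, 0.0, 0.0))
```

The point of the external fixed frame is exactly to supply pose_start and pose_end here: if wheel odometry drifts significantly within one lidar rotation, the interpolated poses, and hence the deskewed points, are wrong.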
Whoops, I missed that post. I should have found it first; sorry for the duplicate question.
Thanks for the response! So if I understand correctly, using rtabmap_util/lidar_deskewing and rtabmap_util/point_cloud_aggregator is similar to my approach 1, is that correct? As I understand it, those nodelets are placed BEFORE the scans are fed to RTAB-Map, as a preprocessing step, right? If so, I should be careful to avoid wheel slip and overly aggressive motions, since the only source of motion estimation during the scan time window is wheel odometry; is this correct as well?

Thanks so much for your quick response and kindness!
Yes, it is similar to your first approach, as pre-processing steps.
              wheel/imu odometry TF
                /               \
               v                 v
laser_scan1 -> lidar_deskewing   lidar_deskewing <- laser_scan2
                      \            /
                       v          v
                    point_cloud_aggregator
                        |          \
                        v           \
                   icp_odometry      \
                        | icp_odom    \
                        v              v
                      rtabmap <--------+

You can put them in the same nodelet manager if you want. If you expect a lot of wheel slipping, you may combine the wheel odometry with an IMU (e.g., with the robot_localization package) to at least have a good orientation estimate all the time, even when the wheels are slipping.

Regards,
Mathieu
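Conceptually, the aggregation step in the diagram does something like the following. This is a NumPy sketch, not the actual rtabmap_util/point_cloud_aggregator implementation, and the extrinsics below are hypothetical: each deskewed cloud is transformed into a common base frame using its lidar's mounting pose, then the clouds are concatenated so icp_odometry and rtabmap receive a single cloud.

```python
# Conceptual sketch of the aggregation step (NOT the actual
# rtabmap_util/point_cloud_aggregator code): move each deskewed 2D cloud
# into a common base_link frame, then concatenate.
import numpy as np

def se2(x, y, yaw):
    """2D homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def aggregate(clouds_with_extrinsics):
    """clouds_with_extrinsics: list of (Nx2 points in the lidar frame,
    (x, y, yaw) of that lidar in base_link). Returns one Mx2 cloud in
    base_link."""
    merged = []
    for pts, (x, y, yaw) in clouds_with_extrinsics:
        T = se2(x, y, yaw)
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        merged.append((homo @ T.T)[:, :2])
    return np.vstack(merged)

# Hypothetical extrinsics for lidars mounted at diagonal corners,
# facing opposite directions (yaw 0 and yaw pi).
front = (np.array([[1.0, 0.0]]), ( 0.3,  0.2, 0.0))
rear  = (np.array([[1.0, 0.0]]), (-0.3, -0.2, np.pi))
cloud = aggregate([front, rear])
```

Because the clouds are already deskewed and expressed in one frame, point order no longer carries timing information, which is why the aggregated PointCloud2 (with scan_cloud_is_2d set) is safe to feed downstream while a merged LaserScan was not.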
Your diagram is perfect for clear understanding, thanks so much again!
Best regards,
Jeonghwan Park