Hi,
We're using a Livox Mid360 as the single sensor with rtabmap. For mapping, we use its IMU as the guess frame for icp_odometry, which in turn provides the odom frame for rtabmap. For localization, we use wheel odometry instead (250 Hz and very accurate) and no longer use the IMU or icp_odometry. rtabmap runs at 1 Hz and the Livox point cloud output at 10 Hz (we don't assemble point clouds). Lidar deskewing is set up and synced with the Livox clock using PTP. I have a gif of rviz visualizing the deskewed lidar topic in the odometry frame (tf2; no rtabmap here, just the input for rtabmap): I see barely any theta accumulation error and no distortion in the scan, so I assume this is set up correctly.

The area we're working in is a warehouse with a ceiling about 30 m high and walls 50 m away. The Livox mostly sees those and some large pillars, as there are very few tall structures above 2 m off the ground, and the Livox sees only a few things below 2 m above the ground (and things there often change as well).

Now we're wondering how to improve the accuracy of rtabmap. It's a bit tricky to understand which parameters would improve or worsen certain observed problems. If you have input on any of these or general recommendations, that would be greatly appreciated.

1. Arriving at a location from opposing directions doesn't align well sideways. For the same location the difference is very consistent (within 1 cm), but the amount of consistent sideways difference varies from location to location (in ranges up to 15 cm).
2. When driving towards a predefined location, the pose rtabmap reports is consistent within 1 cm as long as the speed at which we arrive is the same. If we drive there faster, the reported pose tends to lean more to the right (up to 10 cm further right than when driving there slowly). This difference is consistent at the same location, but other locations have a different consistent difference.
3. Scan assembling: I assume this is less useful during localization. However, can we have scan assembling during mapping and not during localization? Or does rtabmap require a scan cloud similar to the one it got for keypoints?
4. The Livox has a 360° view, but say we mapped in a straight line going e.g. east; it seems you can't localize in that map when driving that straight line going west. I assume this is because the pose graph doesn't relate keypoints with different angles but the same coordinates. Is there some setting to change that?
5. Variances for odometry: if we want rtabmap's 'best absolute pose guess', is it good to set these to e.g. 1e-1, to give higher weight to rtabmap's guess? The aim would be to get consistent absolute poses regardless of how you got to that location. I would assume setting this to 1e-3 or 1e-4 would make it heavily dependent on odometry not making any mistakes.
6. Voxel size: can we freely adjust this even if we use a map made at a different voxel size? (Similar question for other parameters: are there some that are constrained by what you set them to during mapping?)
7. MaxCorrespondenceDistance: what effect does lowering/increasing this have on accuracy and/or on rtabmap's ability to keep tracking where it is?

We're running ROS 2 Jazzy and using the rtabmap binaries. The parameters we're using: rtabmap.yaml

An example to hopefully clarify what I mean by the 'sideways' difference:
Depending on speed and direction, the vehicle ends up differently (this is due to localization, not control of the robot; I verified).
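For reference, here is an illustrative fragment of the kind of icp_odometry setup described above. This is only a sketch, not the actual attached rtabmap.yaml; the values are placeholders:

```yaml
# Illustrative sketch only -- not the attached rtabmap.yaml.
/icp_odometry:
  ros__parameters:
    frame_id: "base_link"       # placeholder robot base frame
    odom_frame_id: "odom"
    wait_imu_to_init: true      # use the Livox IMU for the orientation guess
    expected_update_rate: 15.0  # lidar publishes at 10 Hz
    # RTAB-Map library parameters are passed as strings:
    Icp/PointToPlane: "true"
    Icp/VoxelSize: "0.3"        # placeholder value
```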
Hi,
If you can share a rosbag of the raw lidar data + wheel odom + imu + tf and tf_static, it would be easier for me to recommend which parameters to tune in your case. For your questions:

1. If it varies between localizations, maybe there are some errors in the map, so the robot localizes 10-15 cm off from reality.
2. Localization should be independent of speed. Furthermore, I would assume that as soon as the robot stops, if the offset were speed-dependent, it would then know it is off by 10 cm and re-correct itself. If localization is still 10 cm off after stopping and the robot doesn't re-localize correctly, then it would be a map issue (independent of speed). Speed would influence the deskewing, but as you verified that deskewing works as expected, it could be a map issue.
3. You can assemble only for mapping, only for localization, or both. The only thing is that with assembling, the point cloud will be denser, which would influence the parameter "Icp/CorrespondenceRatio" (it would need to be set smaller). Using assembled clouds could also give different normals in the point clouds than without assembling, so if Icp/PointToPlane is used, it could affect ICP registration.
4. You should be able to localize in any direction. One type of proximity detection is affected by the RGBD/ProximityAngle parameter, which you can set to 180 to accept detections from the opposite direction. If the robot is coming back in the opposite direction more than 1 meter away, the RGBD/ProximityPathFilteringRadius parameter can also be increased.
5. This is mostly relevant for the mapping phase. If odometry is not reliable or drifts fast, a large variance should be set; otherwise it is better to keep it as low as possible. The reason is that odometry is used to determine whether a loop closure is good or not. Setting it too low while in reality the odometry drifts more could make RTAB-Map reject good loop closures; setting it too high while in reality the odometry drift is low could make RTAB-Map accept bad loop closures. If you do a loop of 100 meters with the robot ending exactly at the location it started and the drift is 1 meter, then you have 1% odometry drift. If the robot moves 50 cm per second, a variance of 0.05*0.05 could be used. If you use ICP odometry, a covariance is computed for you based on the matching results.
6. Same answer as 3.
7. As a rule of thumb, you can set Icp/MaxCorrespondenceDistance to 10 times the voxel size; that is what I do in my examples. Setting it larger can help to detect loop closures / localize when the robot thinks it is farther from or closer to reality than it actually is. In most cases the ICP results will be pretty much the same.

cheers,
Mathieu
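To make the suggestions in this reply concrete, here is a minimal, hypothetical YAML fragment pulling them together. The parameter names are real RTAB-Map parameters; the values are illustrative assumptions (including the 0.3 m voxel size), not tested recommendations for this warehouse:

```yaml
# Hypothetical rtabmap node fragment illustrating the suggestions above.
# All values are example assumptions; tune against your own data.
/rtabmap:
  ros__parameters:
    # RTAB-Map library parameters are passed as strings:
    Reg/Strategy: "1"                        # ICP registration (lidar-only setup)
    Icp/VoxelSize: "0.3"                     # example voxel size (meters)
    Icp/MaxCorrespondenceDistance: "3.0"     # rule of thumb: ~10x Icp/VoxelSize
    RGBD/ProximityAngle: "180"               # accept proximity links from opposite directions
    RGBD/ProximityPathFilteringRadius: "2.0" # increase if revisiting >1 m from the mapped path
    Icp/CorrespondenceRatio: "0.2"           # set smaller if scans are assembled (denser clouds)
    # Variance example from point 5: ~1% drift at 0.5 m/s -> 0.05*0.05 = 0.0025
    # (relevant when the odometry source doesn't already publish a covariance)
```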