Global Optimization with Loop Closure and Navigation After Mapping

imran05
Hello,
I'm working on mapping and navigation in an apple orchard using rtabmap SLAM with stereo odometry from an OAK-D-PR-W POE camera. I'm following the ROS tutorial for stereo handheld mapping (https://wiki.ros.org/rtabmap_ros/Tutorials/StereoHandHeldMapping).
I have two questions:

Mapping issue: When moving from the 2nd to the 3rd row in the orchard, errors sometimes appear in my map. These errors occasionally get fixed when a loop closure is detected and the overall map is optimized. However, sometimes loop closure doesn't optimize the map well, resulting in duplicated rows and poor mapping quality. Is there a way to delete or refine specific poses that are causing problems in the mapping? I've attached an image of my map showing this issue.
Database link: https://drive.google.com/file/d/1Y5tr5iF25V79tsQ_Gwst_YG-Ni_uftZf/view?usp=sharing
Navigation issue: I'm using the teb_local_planner with move_base. During navigation, my robot initially localizes itself correctly within the map, but when I provide waypoint goals, it sometimes loses localization in the middle of the rows. This causes incorrect positioning on the map and, consequently, navigation problems. Is there a method to improve localization using only visual information, without adding additional sensors?

Thank you for your help!

Re: Global Optimization with Loop Closure and Navigation After Mapping

matlabbe
Administrator
Hi,

It looks like there are some stereo issues that are causing the visual odometry (VO) to drift more.

1) The stereo rectification doesn't look perfect; there is a 1-pixel vertical shift between the left and right images:
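One way to double-check this yourself is a quick epipolar check; here is a minimal sketch, assuming OpenCV and a rectified left/right pair exported from the database (the file names are placeholders). In a properly rectified pair, the same feature should sit on the same image row in both views:

import cv2
import numpy as np

# Placeholder file names: a rectified left/right pair exported from the database.
left = cv2.imread('left_rect.png')
right = cv2.imread('right_rect.png')

# Put the images side by side and draw horizontal lines every 20 rows.
# In a well-rectified pair, corresponding features lie on the same line;
# a consistent ~1 pixel vertical offset indicates a rectification problem.
pair = np.hstack((left, right))
for y in range(0, pair.shape[0], 20):
    cv2.line(pair, (0, y), (pair.shape[1] - 1, y), (0, 255, 0), 1)

cv2.imshow('epipolar check', pair)
cv2.waitKey(0)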


2) Bad time synchronization between the left and right cameras is causing very large covariance on some links, like this one:


Looking at the images, we clearly see that the left image is synced with the wrong right image:


I've shown on the right the resulting point clouds for two consecutive frames. The red one is generated from the top image, where we can see that the disparity is much larger than in the one below for a similar point of view. This issue seems to be at its worst when the robot is rotating at the end of each row. I would try to get a good map before trying to navigate in it.
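Since depth scales as Z = f*B/d for focal length f and baseline B under the standard stereo model, a mismatched pair that inflates the disparity d pulls the whole cloud toward the camera. To quantify the sync problem, you could log the left/right stamp offsets; a minimal sketch, assuming rospy and placeholder topic names:

import rospy
import message_filters
from sensor_msgs.msg import Image

# Placeholder topic names: use your camera's actual left/right image topics.
LEFT_TOPIC = '/left/image_rect'
RIGHT_TOPIC = '/right/image_rect'

def callback(left, right):
    # A well-synced (ideally hardware-synced) pair should have an offset near 0.
    dt = (left.header.stamp - right.header.stamp).to_sec()
    rospy.loginfo('left-right stamp offset: %.6f s', dt)

rospy.init_node('stereo_sync_check')
left_sub = message_filters.Subscriber(LEFT_TOPIC, Image)
right_sub = message_filters.Subscriber(RIGHT_TOPIC, Image)
sync = message_filters.ApproximateTimeSynchronizer([left_sub, right_sub], queue_size=10, slop=0.1)
sync.registerCallback(callback)
rospy.spin()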

Is there a method to improve localization using only visual information, without adding additional sensors?
Here are the steps I would take to improve VSLAM:
1) Improve the stereo calibration,
2) Fix the stereo sync,
3) If the VO still looks like it drifts too much even after fixing 1 and 2, you may look into integrating a VIO approach instead,
4) For visual localization, using SIFT/SURF/SuperPoint could help to localize over time. I see you are outdoors; classic features like ORB/BRIEF/SIFT/SURF are quite sensitive to illumination changes and shadows, so features like SuperPoint may be more robust in those cases (see the sketch after this list).
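For point 4, a minimal sketch of switching the detector follows. The parameter names are rtabmap's; the '/rtabmap/rtabmap' namespace and the model path are assumptions about your particular setup, and SuperPoint requires rtabmap to be built with libtorch support. The same values can be set with <param> tags in your launch file instead; they take effect when the rtabmap node is (re)started:

import rospy

rospy.init_node('set_rtabmap_features', anonymous=True)
# 11 = SuperPoint in rtabmap's feature detector enum.
rospy.set_param('/rtabmap/rtabmap/Kp/DetectorStrategy', '11')  # features for loop closure detection
rospy.set_param('/rtabmap/rtabmap/Vis/FeatureType', '11')      # features for visual matching
# Assumed path to a trained SuperPoint model file.
rospy.set_param('/rtabmap/rtabmap/SuperPoint/ModelPath', '/path/to/superpoint.pt')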

cheers,
Mathieu