A bunch of general questions


g.bartoli
Hi Mathieu,
In the past weeks I spent a lot of time doing experiments with RTAB-Map in my setup, I had both successes and failures, so I came up with a list (with no particular order) of specific questions to ask you in order to fully understand the system and obtain the best performances.

I also thought it would have been a good idea to collect all of them in a single post for easier future searches and also to support other people here that may have similar doubts. Maybe one day this will help you to build a General FAQ or enrich the official documentation, who knows? ;)

1) How does the PoseScanMatching option actually work? Is it performing ICP between the laser scan (real or fake) and the global map (grid_map) to further correct the odometry? Do the "LccIcp..." parameters also affect PoseScanMatching behavior, or is it independent? When the robot is still and no loop closure is found, I often see this message appearing... is it correlated to ScanMatching?
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
2) What is the recommended way to teleoperate a robot to map an unknown environment? First back and forth on the whole path, then making some rotations in some places to fill empty spaces or just go in one direction while performing one rotation every n meters?

3) From what I understood, once the graph has at least one constraint (i.e. a loop closure), the algorithm (TORO, in my case) continuously searches for a local minimum in the function that computes visual error from poses and 3D features, right? I also think that once a local minimum has been found (respecting iterations and epsilon), that minimum should be used as a starting point for the next optimization leading to an even more stable configuration, and so on.
On the contrary, it seems that the optimization always starts from an initial graph configuration (the last loop closure?), ignoring the previously optimized step, so it sometimes cycles among the same two or three sub-optimal configurations (local minima). Since only one of them is the best, when the next loop closure is found you often risk carrying over a badly optimized graph that cannot be recovered anymore.
I'm aware that our odometry is a bit erroneous, but since the loop closures work very well in correcting the drift, I would like to know how to avoid the next optimizations making it worse. I have recorded two mapping sessions; the videos can be downloaded from here (sorry for the .mkv format, but I'm unable to access YouTube from the office):

- https://www.dropbox.com/s/9gywwyqi8onqn8v/mapping1.mkv?dl=0
- https://www.dropbox.com/s/44p4cgx81u501rn/mapping2.mkv?dl=0

This effect can be seen especially in the final part of the second one and this is happening also in the localization phase (if you want, I can make a video of it, too).
What are your suggestions about this? Is it all caused by low odometry accuracy or can it be enhanced by adjusting "RGBD/OptimizeIterations", "RGBD/OptimizeEpsilon" and "RGBD/OptimizeMaxError"?

4) Thinking about your suggested "robot_pose_ekf" approach to improve accuracy by fusing visual and wheel odometry, do you know how I can handle the "lost odometry" message coming from "rgb_odometry"? Using the RGB-D sensor handheld, I used to recover tracking by going back to the last tracked frame, but on the robot the easiest way seems to be calling the "reset_odom" service; however, that resets odometry to the identity transform, relying on a subsequent relocalization. Will "robot_pose_ekf" properly handle this situation?

5) Are there any advantages to using "odom_frame_id" in rtabmap instead of subscribing to the "odom" topic?

6) In localization mode, when I set the navigation goal, sometimes I receive this message even if the point is very close to the graph:
[ERROR] (2015-11-18 12:29:31.209) Rtabmap.cpp:3750::updateGoalIndex() The nearest pose on the path not found! Aborting the plan...
[ WARN] [1447846171.262195673]: Planning: Plan failed!
To avoid this, do I need to increase "RGBD/DetectionRate" to create more nodes in the graph or is it better to adjust "RGBD/LocalRadius"?

7) I followed your AZ3 example launch file and enabled LccIcp 2D on loop closures. Whenever one has been found, does the ICP occur only on the matching nodes or on every graph node? For 2D navigation (x, y, angle), can ICP 3D achieve better results, or is it not necessary?

8) We are using an ASUS Xtion Pro Live, which exhibits a noticeable distortion in the depth cloud for points at distances greater than about 5 meters. I enabled depth registration with openni2_launch and also provided your calibration files (rgb_PS1080_PrimeSense.yaml and depth_PS1080_PrimeSense.yaml) to it, but the distortion seems to remain unchanged (you can also see that in the fake scan in the previous videos). Apart from doing a proper IR calibration (I still have some problems doing that...), do you think "Kp/MaxDepth" should be set to about 5 meters to exclude distant points from the visual vocabulary that may decrease optimization accuracy?

9) Other than building a 3D map of the environment and splitting depth cloud into obstacles and ground, does the cloud map have other uses for navigation purposes?

10) Is there an easy way to inspect optimization (TORO/g2o/GTSAM) and loop closures (RANSAC and ICP) progress without resorting to set the logger level to DEBUG? It would be nice to easily analyze objective function values, iterations, convergence and stop criterion to better tune the corresponding parameters.

11) Is it possible to change the rtabmap logger levels (i.e. to DEBUG) using ROS parameters without rtabmapviz menu?

As always, many thanks!
Guido

Re: A bunch of general questions

matlabbe
Administrator
A FAQ section could be nice. At least the questions/answers on this forum can be easily found on Google.

g.bartoli wrote
1) How does the PoseScanMatching option actually work? Is it performing ICP between the laser scan (real or fake) and the global map (grid_map) to further correct the odometry? Do the "LccIcp..." parameters also affect PoseScanMatching behavior, or is it independent? When the robot is still and no loop closure is found, I often see this message appearing... is it correlated to ScanMatching?
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
ICP is done between the scan of the new node added to the map and the scan of the previous node. The LccIcp (2D) parameters are used for PoseScanMatching. When the robot is not moving and the parameters "RGBD/LinearUpdate=0" and "RGBD/AngularUpdate=0" are set, ICP is still done. PoseScanMatching is not done in localization mode.

Note that when using PoseScanMatching with a fake scan from the Kinect, you may want to limit the depth used, to avoid the problem shown in the paper "Unsupervised intrinsic calibration of depth sensors via SLAM" with depths over ~4 meters, which would add more error to odometry than it removes.
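A sketch of how that depth limit might be applied when the fake scan is generated with `depthimage_to_laserscan` (the topic names and the 4.0 m value here are illustrative assumptions, not a recommended setup):

```xml
<!-- Illustrative fragment: clamp the fake scan range so PoseScanMatching
     ignores the distorted far-field depth (~4 m and beyond). -->
<node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depth_to_scan">
  <param name="range_max" value="4.0"/>  <!-- drop returns beyond ~4 m -->
  <remap from="image" to="/camera/depth_registered/image_raw"/>
</node>

<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <param name="RGBD/PoseScanMatching" type="string" value="true"/>
  <remap from="scan" to="/scan"/>
</node>
```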

g.bartoli wrote
2) What is the recommended way to teleoperate a robot to map an unknown environment? First back and forth on the whole path, then making some rotations in some places to fill empty spaces or just go in one direction while performing one rotation every n meters?
I assume that you want to create a map for autonomous navigation, not only for visualization.

It depends on which sensors are used. If you are using a 2D laser rangefinder with a field of view >= 180 degrees, only one traversal could be ok (if odometry is relatively good and the scans are hitting walls). To increase the chance of loop closure detections, a second traversal in the reverse direction would help (if the robot will have to navigate in different directions on the same path).

If you are using only a Kinect (with its limited field of view), multiple traversals in different directions would be required, while rotating at some places to find loop closures between the paths. I did most of my experiments using the approach above (with a 2D laser), so with the Kinect it is still open for ideas.

g.bartoli wrote
3) From what I understood, once the graph has at least one constraint (i.e. a loop closure), the algorithm (TORO, in my case) continuously searches for a local minimum in the function that computes visual error from poses and 3D features, right? I also think that once a local minimum has been found (respecting iterations and epsilon), that minimum should be used as a starting point for the next optimization leading to an even more stable configuration, and so on.
On the contrary, it seems that the optimization always starts from an initial graph configuration (the last loop closure?), ignoring the previously optimized step, so it sometimes cycles among the same two or three sub-optimal configurations (local minima). Since only one of them is the best, when the next loop closure is found you often risk carrying over a badly optimized graph that cannot be recovered anymore.
I'm aware that our odometry is a bit erroneous, but since the loop closures work very well in correcting the drift, I would like to know how to avoid the next optimizations making it worse. I have recorded two mapping sessions; the videos can be downloaded from here (sorry for the .mkv format, but I'm unable to access YouTube from the office):

- https://www.dropbox.com/s/9gywwyqi8onqn8v/mapping1.mkv?dl=0
- https://www.dropbox.com/s/44p4cgx81u501rn/mapping2.mkv?dl=0

This effect can be seen especially in the final part of the second one and this is happening also in the localization phase (if you want, I can make a video of it, too).
What are your suggestions about this? Is it all caused by low odometry accuracy or can it be enhanced by adjusting "RGBD/OptimizeIterations", "RGBD/OptimizeEpsilon" and "RGBD/OptimizeMaxError"?
I fixed a bug yesterday with TORO where there was an early exit when iterating. In the same commit, I fixed the g2o and GTSAM early exit using the epsilon parameter ("RGBD/OptimizeEpsilon"). Optimization is now done based on the final result of the previous optimization (re-using the last optimized poses as the guess for the new optimization), so it no longer always starts from the initial odometry poses. This may reduce the problem of different local minima that you observed.

Note that the graph optimization results may also change if some nodes in the map are transferred to LTM or retrieved from LTM. This can only happen if memory management is activated ("Rtabmap/TimeThr>0" or "Rtabmap/MemoryThr>0") and one of those limits is reached.

Indeed, it seems that odometry drifts very fast when rotating. Better odometry would reduce the errors to be minimized in the graph, so convergence would be faster. It would also help in localization mode when no relocalization happens for some time.
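The optimizer knobs discussed in the question could be set in a launch file like this (the values shown are illustrative starting points, not tuned recommendations):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Maximum graph optimization iterations per update -->
  <param name="RGBD/OptimizeIterations" type="string" value="100"/>
  <!-- Stop early once the error change per iteration falls below epsilon -->
  <param name="RGBD/OptimizeEpsilon"    type="string" value="0.001"/>
  <!-- Bound on the acceptable residual error after optimization -->
  <param name="RGBD/OptimizeMaxError"   type="string" value="1.0"/>
</node>
```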

g.bartoli wrote
4) Thinking about your suggested "robot_pose_ekf" approach to improve accuracy by fusing visual and wheel odometry, do you know how I can handle the "lost odometry" message coming from "rgb_odometry"? Using the RGB-D sensor handheld, I used to recover tracking by going back to the last tracked frame, but on the robot the easiest way seems to be calling the "reset_odom" service; however, that resets odometry to the identity transform, relying on a subsequent relocalization. Will "robot_pose_ekf" properly handle this situation?
Look at this post.

g.bartoli wrote
5) Are there any advantages to using "odom_frame_id" in rtabmap instead of subscribing to the "odom" topic?
The advantage is that if your robot's odometry is only provided through TF, you can use it directly. Also, rtabmap would then only synchronize images and/or scan in its callback. If your odometry frequency is low, I would go for the odometry topic, particularly if the visual odometry of rtabmap is used (the stamps of the odometry topic will match the stamps of the images used to compute it). With TF, you also cannot have the odometry covariance.
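The two configurations look like this in a launch file (topic and frame names are placeholders for your robot's own):

```xml
<!-- Option A: odometry via topic (preferred when visual odometry is used,
     and the only way to pass an odometry covariance) -->
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <remap from="odom" to="/odom"/>
</node>

<!-- Option B: odometry read from TF only (no covariance available) -->
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <param name="odom_frame_id" value="odom"/>
</node>
```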

g.bartoli wrote
6) In localization mode, when I set the navigation goal, sometimes I receive this message even if the point is very close to the graph:
[ERROR] (2015-11-18 12:29:31.209) Rtabmap.cpp:3750::updateGoalIndex() The nearest pose on the path not found! Aborting the plan...
[ WARN] [1447846171.262195673]: Planning: Plan failed!
To avoid this, do I need to increase "RGBD/DetectionRate" to create more nodes in the graph or is it better to adjust "RGBD/LocalRadius"?
It is better to increase "RGBD/LocalRadius".

g.bartoli wrote
7) I followed your AZ3 example launch file and enabled LccIcp 2D on loop closures. Whenever one has been found, does the ICP occur only on the matching nodes or on every graph node? For 2D navigation (x, y, angle), can ICP 3D achieve better results, or is it not necessary?
LccIcp was mostly tested with laser scans (for which the 3D points are far more precise than a scan generated from the Kinect). ICP is only done between the matching nodes of a loop closure detection. ICP 3D could be more precise than 2D only (if the scans were created from the Kinect), but it is more computationally expensive.

g.bartoli wrote
8) We are using an ASUS Xtion Pro Live, which exhibits a noticeable distortion in the depth cloud for points at distances greater than about 5 meters. I enabled depth registration with openni2_launch and also provided your calibration files (rgb_PS1080_PrimeSense.yaml and depth_PS1080_PrimeSense.yaml) to it, but the distortion seems to remain unchanged (you can also see that in the fake scan in the previous videos). Apart from doing a proper IR calibration (I still have some problems doing that...), do you think "Kp/MaxDepth" should be set to about 5 meters to exclude distant points from the visual vocabulary that may decrease optimization accuracy?
Yes, but fewer loop closures may be found. For the distortions, you can read the paper cited in the first question.
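If you try that, the limit is set like any other rtabmap parameter (the 5.0 m value mirrors the distance where your sensor's distortion appears; tune it to your own sensor):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Ignore features deeper than ~5 m when building the visual vocabulary -->
  <param name="Kp/MaxDepth" type="string" value="5.0"/>
</node>
```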

g.bartoli wrote
9) Other than building a 3D map of the environment and splitting depth cloud into obstacles and ground, does the cloud map have other uses for navigation purposes?
If you are doing 2D navigation, the 3D cloud (rtabmap/cloud_map) is not used. For remote visualization, it is better to subscribe to rtabmap/mapData (with the MapCloud RVIZ plugin, a remote rtabmap_ros/map_assembler or rtabmapviz) than to rtabmap/cloud_map.

g.bartoli wrote
10) Is there an easy way to inspect optimization (TORO/g2o/GTSAM) and loop closures (RANSAC and ICP) progress without resorting to set the logger level to DEBUG? It would be nice to easily analyze objective function values, iterations, convergence and stop criterion to better tune the corresponding parameters.
In yesterday's commit, I also added two new statistics that can be seen in rtabmapviz -> Statistics view -> Loop panel: "Optimization error" (final optimization error) and "Optimization iterations" (how many iterations were done). If you are building rtabmap from source and want only the optimization output in the log without all the DEBUG output, you can change the logger level of the optimization code to UWARN, for example (don't forget to re-install the rtabmap libraries so that rtabmap_ros picks up the changes).

g.bartoli wrote
11) Is it possible to change the rtabmap logger levels (i.e. to DEBUG) using ROS parameters without rtabmapviz menu?
The logger level in rtabmapviz only affects what rtabmapviz itself logs, not the rtabmap node. For the rtabmap node, the level can be set at start using the arguments of the node (--udebug or --uinfo). For convenience, I've just added "log_debug", "log_info", "log_warning" and "log_error" services for the rtabmap and visual odometry nodes.
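In a launch file, the startup argument can be passed like this (the service call in the comment assumes the node runs in the /rtabmap namespace):

```xml
<!-- Start the rtabmap node in DEBUG from launch -->
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" args="--udebug"/>

<!-- At runtime, the new services can switch the level, e.g.:
       rosservice call /rtabmap/log_warning -->
```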

cheers,
Mathieu

Re: A bunch of general questions

g.bartoli
1) Ok, I understand the LccIcp parameters now. Is there a reason why PoseScanMatching is not used in localization mode? Shouldn't registering laser_scan onto grid_map also give better accuracy for navigation?

2) Ok!

3) Ok, I will recompile and try this again, thanks! I'm also looking forward to improving the wheel odometry, in particular for in-place rotations.

4) Ok, we don't have an IMU, so Odom/ResetCountdown seems the best choice, thanks! Furthermore, I see that a seemingly more complete package for EKF fusion exists in ROS, called robot_navigation. Do you have any experience with it? Do you think it can be better than robot_pose_ekf? Here is the developer announcement.

5) Our motor_driver node publishes both /odom (without covariance) and /tf (using a differential wheel model) at 50 Hz. I'm planning to integrate RGB-D visual odometry, so I think connecting /odom to rtabmap is the best choice.

6) Ok!

7) Ok, understood. However, I noticed that when I tried ICP 3D (this also happens with RGBD/OptimizeSlam2d = false), I had problems with navigation: the graph is optimized in XYZ and is not planar, so every time I set a navigation goal I receive an error that the final pose has an invalid Z coordinate. It is not a blocker, though, since my application is non-holonomic 2D navigation.

8) Do you think there is a way to apply the calibration obtained in the paper to the ASUS Xtion Pro? I read that they perform a computationally expensive SLAM approach to derive the calibration, but once it has been obtained, it can be applied to the depth image in under 2 ms to get a far better point cloud. However, they did not publish any open-source library or final calibration files for the sensors they tested.

9, 10, 11) Nice, thanks!

However, as others have already stated, let me say that RTAB-Map is one of the best-written pieces of software I have ever used. Congratulations, Mathieu! :)
~Guido

Re: A bunch of general questions

matlabbe
Administrator
1) PoseScanMatching is there for convenience, as this kind of odometry correction could be done before the rtabmap node. In localization mode, the latest node is always deleted at the end of the iteration, so it cannot be used for PoseScanMatching. To be able to use it in localization mode, the previous node would have to be saved, maybe in the Rtabmap class.

4) The robot_navigation package seems nice, but I have never used it (nor robot_pose_ekf).

7) Using ICP 3D, a 2D transformation cannot be forced. In the next version (0.11), we should be able to do that.

8) Once the depth calibration matrix is computed, it can be applied very fast. I think I've seen code somewhere to do that (I think it is this dead link referenced from the IntrinsicCalibration tutorial). If you can compute this matrix, you could apply it to the depth image before the rtabmap node.

Thank you for the encouragement!

Mathieu