Re: A bunch of general questions

Posted by matlabbe on
URL: http://official-rtab-map-forum.206.s1.nabble.com/A-bunch-of-general-questions-tp856p859.html

A FAQ section could be nice. At least the questions/answers on this forum can be easily found on Google.

g.bartoli wrote
1) How does the PoseScanMatching option actually work? Is it performing ICP between the laser scan (real or fake) and the global map (grid_map) to further correct the odometry? Do the "LccIcp..." parameters also affect PoseScanMatching behavior, or is it independent? When the robot is still and no loop closure is found, I often see this message appearing... is this correlated to ScanMatching?
[pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. Relax your threshold parameters.
ICP is done between the scan of the new node added to the map and the scan of the previous node. LccIcp (2D) parameters are used for PoseScanMatching. When the robot is not moving and the parameters "RGBD/LinearUpdate=0" and "RGBD/AngularUpdate=0" are set, ICP is still done. PoseScanMatching is not done in localization mode.
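For reference, those parameters are set like any other RTAB-Map parameter in the launch file. A hypothetical fragment (the values are only illustrative, not recommendations):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Add a new node to the map only after moving at least 10 cm or ~6 degrees.
       Set both to 0 to keep adding nodes (and doing ICP) while standing still. -->
  <param name="RGBD/LinearUpdate"  value="0.1"/>  <!-- meters -->
  <param name="RGBD/AngularUpdate" value="0.1"/>  <!-- radians -->
</node>
```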

Note that when using PoseScanMatching with a fake scan from the Kinect, you may want to limit the depth used, to avoid the problem shown in the paper "Unsupervised intrinsic calibration of depth sensors via SLAM": with depth over ~4 meters, scan matching would add more errors to odometry than it reduces.
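As a sketch, if the fake scan is generated with the depthimage_to_laserscan node, its range_max parameter can cap the depth used (topic name and value are only illustrative here):

```xml
<node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan">
  <remap from="image" to="/camera/depth/image_raw"/>
  <!-- Drop readings beyond ~4 m, where Kinect depth distortion grows -->
  <param name="range_max" value="4.0"/>
</node>
```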

g.bartoli wrote
2) What is the recommended way to teleoperate a robot to map an unknown environment? First back and forth on the whole path, then making some rotations in some places to fill empty spaces, or just going in one direction while performing one rotation every n meters?
I assume that you want to create a map for autonomous navigation, not only for visualization.

It depends on which sensors are used. If you are using a 2D laser rangefinder with a field of view of 180 degrees or more, a single traversal could be enough (if odometry is relatively good and the scans are hitting walls). To increase the chance of loop closure detections, a second traversal in the reverse direction would be worthwhile (if the robot will have to navigate in different directions on the same path).

If you are using only a Kinect (with its limited field of view), multiple traversals in different directions would be required, while rotating at some places to find loop closures between the paths. I did most of my experiments using the approach above (with a 2D laser), so with the Kinect it is still open for ideas.

g.bartoli wrote
3) From what I understood, once the graph has at least one constraint (i.e., a loop closure), the algorithm (TORO, in my case) continuously searches for a local minimum of the function that computes visual error from poses and 3D features, right? I also think that once a local minimum has been found (respecting iterations and epsilon), that minimum should be used as a starting point for the next optimization, leading to an even more stable configuration, and so on.
However, it seems that the optimization always starts from an initial graph configuration (the last loop closure?), ignoring the previously optimized step, so it sometimes cycles continuously among the same two or three sub-optimal configurations (local minima). Since only one of them is the best, when the next loop closure is found you often risk carrying over a badly optimized graph that cannot be recovered anymore.
I'm aware that our odometry is a bit erroneous, but since loop closures work very well in correcting the drift, I would like to know how to prevent subsequent optimizations from making it worse. I have recorded two mapping sessions; the videos can be downloaded from here (sorry for the .mkv format, but I'm unable to access YouTube from the office):

- https://www.dropbox.com/s/9gywwyqi8onqn8v/mapping1.mkv?dl=0
- https://www.dropbox.com/s/44p4cgx81u501rn/mapping2.mkv?dl=0

This effect can be seen especially in the final part of the second one, and it is also happening in the localization phase (if you want, I can make a video of that, too).
What are your suggestions about this? Is it all caused by low odometry accuracy, or can it be improved by adjusting "RGBD/OptimizeIterations", "RGBD/OptimizeEpsilon" and "RGBD/OptimizeMaxError"?
I fixed a bug yesterday in TORO where there was an early exit when iterating. In the same commit, I fixed the g2o and GTSAM early exit using the epsilon parameter ("RGBD/OptimizeEpsilon"). Now the optimization starts from the final result of the previous optimization (re-using the last optimized poses as the guess for the new optimization), so it no longer always starts from the initial odometry poses. This may reduce the problem of the different local minima that you have observed.
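The three optimization parameters mentioned in the question would be set like this in a launch file (values shown are only illustrative, not tuned recommendations):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <param name="RGBD/OptimizeIterations" value="100"/>   <!-- max optimizer iterations -->
  <param name="RGBD/OptimizeEpsilon"    value="0.001"/> <!-- stop early when the error change falls below this -->
  <param name="RGBD/OptimizeMaxError"   value="1.0"/>   <!-- reject loop closures that raise the error above this -->
</node>
```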

Note that the graph optimization results may also change if some nodes in the map are transferred to LTM or retrieved from LTM. This can only happen if memory management is activated ("Rtabmap/TimeThr>0" or "Rtabmap/MemoryThr>0") and one of those limits is reached.

Indeed, it seems that odometry drifts very fast when rotating. Better odometry would reduce the errors to be minimized in the graph, leading to faster convergence. It would also help in localization mode when no re-localization happens for some time.

g.bartoli wrote
4) Thinking about your suggested "robot_pose_ekf" approach to improve accuracy by fusing visual and wheel odometry, do you know how I can handle the "lost odometry" message coming from "rgb_odometry"? Using the RGB-D sensor handheld, I used to recover tracking by going back to the last tracked frame, but on the robot the easiest way seems to be calling the "reset_odom" service; however, that resets odometry to the identity transformation, relying on a subsequent relocalization. Will "robot_pose_ekf" properly handle this situation?
Look at this post.

g.bartoli wrote
5) Are there any advantages to using "odom_frame_id" in rtabmap instead of subscribing to the "odom" topic?
The advantage is that if odometry is only provided through TF on your robot, you can use it directly. Also, rtabmap would then only need to synchronize images and/or scan for its callback. If your odometry frequency is low, I would go for the odometry topic, particularly if the visual odometry of rtabmap is used (the stamps of the odometry topic will match the stamps of the images used to compute it). With TF, you cannot have the odometry covariance.
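The two alternatives look like this in a launch file (a hypothetical fragment; the odometry topic name is only an example):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Option A: subscribe to an odometry topic (matching stamps + covariance) -->
  <remap from="odom" to="/base_controller/odom"/>
  <!-- Option B: read odometry from TF instead (no covariance available) -->
  <!-- <param name="odom_frame_id" value="odom"/> -->
</node>
```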

g.bartoli wrote
6) In localization mode, when I set the navigation goal, sometimes I receive this message even if the point is very close to the graph:
[ERROR] (2015-11-18 12:29:31.209) Rtabmap.cpp:3750::updateGoalIndex() The nearest pose on the path not found! Aborting the plan...
[ WARN] [1447846171.262195673]: Planning: Plan failed!
To avoid this, do I need to increase "RGBD/DetectionRate" to create more nodes in the graph or is it better to adjust "RGBD/LocalRadius"?
It is better to increase "RGBD/LocalRadius".
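For example (the 5 m value is only illustrative; tune it to how far your goals can be from the graph):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Radius (meters) used to find the nearest node of the graph around the current pose/goal -->
  <param name="RGBD/LocalRadius" value="5"/>
</node>
```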

g.bartoli wrote
7) I followed your AZ3 example launch file and I enabled LccIcp 2D on loop closures. Whenever one has been found, does the ICP occur only on the matching nodes or on every graph node? For 2D navigation (x, y, angle), can ICP 3D achieve better results, or is it not necessary?
LccIcp was mostly tested with laser scans (for which the 3D points are far more precise than a scan generated from the Kinect). ICP is only done between the matching nodes of a loop closure detection. ICP 3D could be more precise than ICP 2D alone (if the scans were created from the Kinect), but it is more computationally expensive.
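If memory serves, the ICP refinement mode in rtabmap versions of that era was selected with "LccIcp/Type"; the value mapping below is an assumption, so check it against your rtabmap version:

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Assumed mapping: 0 = no ICP, 1 = ICP 3D, 2 = ICP 2D -->
  <param name="LccIcp/Type" value="2"/>
</node>
```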

g.bartoli wrote
8) We are using an ASUS Xtion Pro Live, which exhibits a noticeable distortion in the depth cloud for points at distances greater than about 5 meters. I enabled depth registration with openni2_launch and I also provided your calibration files (rgb_PS1080_PrimeSense.yaml and depth_PS1080_PrimeSense.yaml) to it, but the distortion seems to remain unchanged (you can see that by looking at the fake scan in the previous videos, too). Apart from doing a proper IR calibration (I still have some problems doing that...), do you think that "Kp/MaxDepth" should be set to about 5 meters to exclude distant points from the visual vocabulary that may decrease optimization accuracy?
Yes, but fewer loop closures may be found. For the distortions, you can read the paper cited in the first question.
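That would look like this (the 5 m value comes from your own observation of where the distortion starts):

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- Ignore visual features with depth beyond ~5 m when building the vocabulary -->
  <param name="Kp/MaxDepth" value="5.0"/>
</node>
```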

g.bartoli wrote
9) Other than building a 3D map of the environment and splitting depth cloud into obstacles and ground, does the cloud map have other uses for navigation purposes?
If you are doing 2D navigation, the 3D cloud (rtabmap/cloud_map) is not used. For remote visualization, it is better to subscribe to rtabmap/mapData (with MapCloud RVIZ plugin, a remote rtabmap_ros/map_assembler or with rtabmapviz) than rtabmap/cloud_map.
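A minimal sketch of the remote-assembly option: run map_assembler on the visualization machine so that only the compact rtabmap/mapData is sent over the network and the cloud is rebuilt locally:

```xml
<!-- On the remote (visualization) machine -->
<node pkg="rtabmap_ros" type="map_assembler" name="map_assembler"/>
```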

g.bartoli wrote
10) Is there an easy way to inspect optimization (TORO/g2o/GTSAM) and loop closures (RANSAC and ICP) progress without resorting to set the logger level to DEBUG? It would be nice to easily analyze objective function values, iterations, convergence and stop criterion to better tune the corresponding parameters.
In yesterday's commit, I also added two new statistics that can be seen in rtabmapviz -> Statistics view -> Loop panel: "Optimization error" (final optimization error) and "Optimization iterations" (how many iterations were done). If you are building rtabmap from source, to get only the optimization output in the log without all the DEBUG stuff, you can change the logger level of the optimization code to UWARN, for example (don't forget to re-install the rtabmap libraries so that rtabmap_ros can use the changes).

g.bartoli wrote
11) Is it possible to change the rtabmap logger levels (i.e. to DEBUG) using ROS parameters without rtabmapviz menu?
The logger level in rtabmapviz only affects what rtabmapviz itself logs, not the rtabmap node. For the rtabmap node, the level can be set at start using the node's arguments (--udebug or --uinfo). For convenience, I've just added "log_debug", "log_info", "log_warning" and "log_error" services to the rtabmap and visual odometry nodes.
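Passing those arguments from a launch file looks like this (the services can then be called at runtime with rosservice, e.g. on /rtabmap/log_debug):

```xml
<!-- Start the rtabmap node with DEBUG logging from the beginning -->
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen" args="--udebug"/>
```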

cheers,
Mathieu