http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot I have a question about the configuration at the top of this page. Does it combine visual SLAM with the lidar's sensor data, or is it a combination of visual SLAM and lidar SLAM?
Sorry for the confusing explanation; the point is that I want to know how the lidar is handled. Is it used only as a sensor, or for SLAM itself? Please let me know.
In the top configuration of that page, we use:
1) wheel odometry for odometry,
2) RGB-D images for loop closure detection, and
3) the 2D lidar to refine odometry and loop closures (ICP) and to build the 2D occupancy grid (a minimal launch sketch is shown below).
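For reference, here is a minimal launch sketch of that setup, following the parameter names used in the SetupOnYourRobot tutorial. The topic names (/odom, /scan, /camera/...) are placeholders that depend on your robot:

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
  <param name="frame_id"        type="string" value="base_link"/>
  <param name="subscribe_depth" type="bool"   value="true"/>
  <param name="subscribe_scan"  type="bool"   value="true"/>

  <!-- 1) external wheel odometry -->
  <remap from="odom" to="/odom"/>
  <!-- 2) RGB-D images used for loop closure detection -->
  <remap from="rgb/image"       to="/camera/rgb/image_rect_color"/>
  <remap from="depth/image"     to="/camera/depth_registered/image_raw"/>
  <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
  <!-- 3) 2D lidar used to refine links (ICP) and to build the 2D occupancy grid -->
  <remap from="scan" to="/scan"/>

  <param name="RGBD/NeighborLinkRefining" type="string" value="true"/>  <!-- correct odometry with the scan -->
  <param name="Reg/Strategy"              type="string" value="1"/>     <!-- refine loop closures with ICP -->
  <param name="Reg/Force3DoF"             type="string" value="true"/>  <!-- robot moving on a 2D plane -->
  <param name="Grid/FromDepth"            type="string" value="false"/> <!-- occupancy grid from the lidar -->
</node>
```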
For the second question, it is one or the other: you can set Grid/FromDepth to switch between the lidar and the camera. Note that even when only the lidar is used for the occupancy grid, it is still possible to avoid 3D obstacles during navigation by feeding the camera cloud to the local costmap of move_base. Global plans are then computed in the clean lidar map while navigation uses both sensors, which is actually my favorite config (see the sketch below).
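A sketch of feeding the camera cloud to the local costmap could look like the following. This is only an illustration: the topic names are placeholders, and depending on how your costmaps are configured the parameters may live under an obstacle layer namespace (e.g. local_costmap/obstacle_layer) instead:

```xml
<node pkg="move_base" type="move_base" name="move_base" output="screen">
  <!-- ... usual move_base and costmap configuration ... -->
  <rosparam ns="local_costmap">
    observation_sources: laser_scan_sensor camera_cloud_sensor
    laser_scan_sensor:   {data_type: LaserScan,   topic: /scan,                marking: true, clearing: true}
    camera_cloud_sensor: {data_type: PointCloud2, topic: /camera/depth/points, marking: true, clearing: true,
                          min_obstacle_height: 0.05, max_obstacle_height: 1.5}
  </rosparam>
</node>
```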
Actually, in the latest rtabmap version, the Grid/FromDepth parameter has been renamed Grid/Sensor and a combined camera+lidar config is supported (see the sketch below):
* Grid/Sensor=0: the map is created from the lidar,
* Grid/Sensor=1: the map is created from the depth image,
* Grid/Sensor=2: the map is created from both the lidar and the depth image.
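With a recent enough rtabmap, selecting the combined mode is just a matter of setting that parameter on the rtabmap node, for example:

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
  <!-- replaces Grid/FromDepth in recent versions: 0 = lidar, 1 = depth image, 2 = both -->
  <param name="Grid/Sensor" type="string" value="2"/>
</node>
```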
I see that the version I am using has Grid/FromDepth, which means I can only select either the lidar or the camera.
So if I use the latest version, I can create a map that reflects the sensor information from both? Where can I get it? If I install it with "sudo apt install ros-melodic-rtabmap-ros", will I get the latest version and be able to use Grid/Sensor?
We can inspect the database afterwards in rtabmap-databaseViewer to see how the local grids are created. Here we can see the combination of lidar (yellow) and camera depth used to create obstacle cells.
In particular, the table is not seen by the lidar; only its edge is seen by the camera, and a line of obstacles is added for it. To make sure that the lidar doesn't clear obstacles added by the camera, we would have to use Grid/3D (at the cost of more computation) and make sure rtabmap is built with OctoMap. Since ray tracing is then done in 3D, the 2D lidar normally cannot clear 3D camera obstacles that it doesn't see.
With Grid/3D=true, we can see ray tracing from both sensors in the database viewer.
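A hedged sketch of that setup is below; Grid/3D is the parameter named in the text, while enabling Grid/RayTracing as well is an assumption on my part for getting the 3D clearing behavior described above:

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
  <param name="Grid/Sensor"     type="string" value="2"/>    <!-- lidar + depth image -->
  <param name="Grid/3D"         type="string" value="true"/> <!-- 3D local grids (requires OctoMap build) -->
  <param name="Grid/RayTracing" type="string" value="true"/> <!-- ray trace in 3D -->
</node>
```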
I have updated to the latest version of rtabmap. I then set Grid/Sensor=2, but it throws an error and I cannot create the map. I was able to do it in the demo as you showed above, but when I set Grid/Sensor=2 in the simulation with my robot, no map was generated.
I also get the following error (it also appeared in earlier versions); is it safe to ignore it?
" [pcl::IterativeClosestPoint::computeTransformation] Not enough correspondences found. relax your threshold parameters.
For the pcl error, it means scan matching could not be done because not enough correspondences were found (it is not related to Grid/Sensor, but to Reg/Strategy>=1 or to icp_odometry if you use it). You could increase Icp/MaxCorrespondenceDistance.
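For example, something like the sketch below relaxes the ICP correspondence distance on the rtabmap node; the value is only illustrative, and the same parameter can be set on icp_odometry if that is where the warning comes from:

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
  <param name="Reg/Strategy" type="string" value="1"/>  <!-- ICP refinement enabled -->
  <!-- example value in meters; increase gradually until scan matching finds enough correspondences -->
  <param name="Icp/MaxCorrespondenceDistance" type="string" value="0.2"/>
</node>
```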