Re: Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)
Posted by matlabbe on
URL: http://official-rtab-map-forum.206.s1.nabble.com/Combine-3D-Lidar-and-RGB-D-Camera-to-create-a-single-3D-point-cloud-sensor-fusion-tp10162p10362.html
If the clouds are not aligned when the robot is not moving:
* The transform between the camera and the lidar may not be accurate (a small rotation error causes larger misalignment at longer distances), so a lidar/camera extrinsic calibration may be required.
* The scale of the camera point cloud may be off because of a bad calibration (I would expect the lidar scale to be close to reality). A small experiment to verify this is to measure with a tape the distance between the camera and a wall, and compare it with the values published in the depth image.
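The tape-measure check above can be sketched as a short script. This is a hypothetical example (the function name, the 16-bit millimeter depth encoding, and the image size are assumptions; adapt them to your camera driver):

```python
# Sketch: compare the depth reported by the camera against a distance
# to a flat wall measured by hand with a tape. Assumes a depth image
# encoded as 16-bit millimeters (0 = invalid), as many RGB-D drivers publish.
import numpy as np

def depth_scale_error(depth_image_mm, measured_distance_m, window=20):
    """Median depth of a small patch at the image center vs. tape measure.

    Returns the relative scale error (0.0 means perfect agreement).
    """
    h, w = depth_image_mm.shape
    patch = depth_image_mm[h//2 - window:h//2 + window,
                           w//2 - window:w//2 + window]
    valid = patch[patch > 0]          # ignore invalid (zero) pixels
    depth_m = np.median(valid) / 1000.0  # mm -> m
    return (depth_m - measured_distance_m) / measured_distance_m

# Example: a synthetic depth image of a wall at 2.10 m while the tape says 2.00 m
fake_depth = np.full((480, 640), 2100, dtype=np.uint16)
print(round(depth_scale_error(fake_depth, 2.0), 3))
```

If the error stays significantly above a few percent across different distances, the camera's depth calibration is the likely culprit.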
If the clouds are not aligned when the robot is moving:
* Check if the camera and lidar are using timestamps synced with the same computer's clock.
* You can try enabling "odom_sensor_sync:=true" on the rtabmap node; this adjusts the camera point cloud and lidar scan based on their timestamps and the odometry received.
Unless the camera can see things that the lidar cannot, you may consider using only the lidar for the occupancy grid.
cheers,
Mathieu