Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)


Alex Frantz

I am quite new to RTAB-Map for ROS2; in the past I have used RTAB-Map in ROS1 for simple map/3D point cloud creation with an RGB-D camera (RealSense D455).

Currently, I am running a ROS2 Gazebo simulation with a virtual robot. The robot is equipped with a 3D LiDAR (Livox) and an RGB-D camera (RealSense D455). I am trying to use RTAB-Map with ICP odometry (using only the 3D LiDAR) and to map the environment using both the point cloud generated from the LiDAR and the one from the RGB-D camera. My goal is to obtain a single combined point cloud from both sensors.

So far, I have RTAB-Map mapping properly using only the LiDAR and ICP odometry. A 3D point cloud is created as expected and projected properly onto the map frame. I used the velodyne example from the ROS2 branch of RTAB-Map on GitHub as a reference.

However, when I try to add the RGB-D camera I run into a few issues. When I use both the camera and the LiDAR, the resulting point clouds are not merged (I have tried the Point Cloud Assembler and Point Cloud Aggregator nodes, but my approach does not seem to work).

To clarify, I am using:
   - an ICP odometry node using the LiDAR input
   - the main RTAB-Map node (for SLAM), which uses both the LiDAR input and the RGB-D input (rgb and depth data from the camera)
   - a point cloud assembler node using the data from the 'odom_filtered_input_scan' topic and the RGB-D topics from the camera
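For reference, the three nodes above could be wired roughly as follows in a ROS2 Python launch file. This is only a minimal sketch, not my actual launch file: the package/executable names are the standard rtabmap_ros ones (rtabmap_odom, rtabmap_slam, rtabmap_util), but the topic names (/livox/points, /camera/...) and parameter values are placeholders that would have to match the simulation.

```python
# Minimal sketch of the three-node setup. Node and package names follow
# rtabmap_ros for ROS2; topic names below are assumptions for the simulation.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # ICP odometry driven only by the 3D LiDAR cloud.
        Node(
            package='rtabmap_odom', executable='icp_odometry', output='screen',
            parameters=[{'frame_id': 'base_footprint',
                         'odom_frame_id': 'odom'}],
            remappings=[('scan_cloud', '/livox/points')]),
        # Main SLAM node subscribing to both the LiDAR cloud and the RGB-D data.
        Node(
            package='rtabmap_slam', executable='rtabmap', output='screen',
            parameters=[{'frame_id': 'base_footprint',
                         'subscribe_scan_cloud': True,
                         'subscribe_depth': True,
                         'approx_sync': True}],
            remappings=[('scan_cloud', '/livox/points'),
                        ('rgb/image', '/camera/color/image_raw'),
                        ('rgb/camera_info', '/camera/color/camera_info'),
                        ('depth/image', '/camera/depth/image_raw')]),
        # Assemble the filtered clouds republished by icp_odometry.
        Node(
            package='rtabmap_util', executable='point_cloud_assembler',
            output='screen',
            parameters=[{'assembling_time': 1.0}],
            remappings=[('cloud', 'odom_filtered_input_scan')]),
    ])
```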

The result is two point clouds that are not combined into a single one. The RGB-D point cloud is not even overlaid on the point cloud produced by the LiDAR. See below:

[Figure: two 3D point clouds, LiDAR and RGB-D camera]

I tried troubleshooting the problem in the following way:

 1. Launch RTAB-Map using only RGB-D odometry, with the camera as the main sensor.
 2. Check the TF tree and the transformations from the camera frame to the base_footprint frame.
 3. Check the robot URDF model.

My conclusions are the following:

- When launching RTAB-Map using only RGB-D odometry, the point cloud is transformed in a weird way (see below). As I drive around, the point cloud gets worse and worse, which is what I would expect if a frame transformation is not being computed properly.

- When checking the TF tree and the transformations, everything seems to be correct (see attached image).

- The robot URDF seems fine too.
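One thing worth double-checking in this situation (a common cause of an RGB-D cloud that looks rotated and drifts away from the LiDAR cloud) is the optical-frame rotation in the URDF: camera data is expressed in the optical convention (z forward, x right, y down), while ROS body frames use x forward, y left, z up. The sketch below is a standalone NumPy check of that rotation, independent of ROS; it is an illustration of the convention, not a diagnosis of this particular robot.

```python
import numpy as np

# Rotation from a camera optical frame (z forward, x right, y down) to the
# camera body frame (x forward, y left, z up). In a URDF this is typically
# the fixed joint with rpy="-1.5708 0 -1.5708".
R_body_optical = np.array([[0.0,  0.0, 1.0],
                           [-1.0, 0.0, 0.0],
                           [0.0, -1.0, 0.0]])

# A point 1 m straight ahead of the camera, expressed in optical coordinates:
p_optical = np.array([0.0, 0.0, 1.0])
p_body = R_body_optical @ p_optical
print(p_body)  # [1. 0. 0.] -> 1 m along the body-frame x axis, as expected

# If this rotation is missing from the TF chain, every cloud point ends up
# rotated by 90 degrees, which gets worse-looking as the robot drives around.
```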

I am quite perplexed: everything works fine with the 3D LiDAR, yet all the problems appear as soon as I use the camera (even in single RGB-D camera mapping).

If anyone can help me out, that would be much appreciated, and if you want more explanations or clarifications I am happy to provide them. Apologies if I am misusing some terms; I am quite new to this.

To run the simulation, you can have a look at the repo I am using (

I am also attaching the rtabmap launch file I am using to combine the two sensors (RGB-D camera and 3D LiDAR).


- Alex


Edit: added a missing figure

Re: Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)


It seems that the rgbd_camera config in the URDF is wrong, or is not compatible with the Gazebo version I am using. To make the camera work correctly, I had to hard-code these values (based on that example):
diff --git a/leo_description/urdf/macros.xacro b/leo_description/urdf/macros.xacro
index 98cdf40..b440345 100644
--- a/leo_description/urdf/macros.xacro
+++ b/leo_description/urdf/macros.xacro
@@ -750,11 +750,10 @@
-              <width>${gazebo_camera_width}</width>
-              <height>${gazebo_camera_height}</height>
+              <width>320</width>
+              <height>240</height>
-            <horizontal_fov>${gazebo_camera_horizontal_fov}</horizontal_fov>
-            <vertical_fov>${gazebo_camera_vertical_fov}</vertical_fov>
+            <horizontal_fov>1.047</horizontal_fov>
This is because cx/cy seem stuck at 160/120, which only works if the resolution is 320x240. As for the FOV, if I set anything else the camera cloud no longer matches the lidar cloud.
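The link between those values can be verified with the pinhole model: the principal point sits at the image center, and the focal length follows from the horizontal FOV. A quick check (the 320x240 and 1.047 rad values are the ones hard-coded in the diff above):

```python
import math

# Pinhole model: for horizontal FOV 'hfov' and image width 'w',
# fx = (w / 2) / tan(hfov / 2); the principal point is the image center.
w, h = 320, 240
hfov = 1.047  # rad (~60 degrees), as hard-coded in the diff above

fx = (w / 2) / math.tan(hfov / 2)
cx, cy = w / 2, h / 2

print(round(fx, 1), cx, cy)  # 277.2 160.0 120.0
```

So with cx/cy fixed at 160/120, any resolution other than 320x240 shifts the principal point away from the image center, which reprojects every depth pixel to the wrong 3D ray.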



For the rtabmap config using the lidar and the camera at the same time, here is an updated version of your launch file: