Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)


Alex Frantz
Hi!

I am quite new to RTAB-MAP for ROS2 and I have used RTAB-MAP in ROS1 for simple map/3D point cloud creation using an RGB-D camera (Realsense D455) in the past.

Currently, I am running a ROS2 Gazebo simulation with a virtual robot. The robot is equipped with a 3D LiDAR (Livox), and an RGB-D camera (Realsense D455). I am trying to use RTAB-MAP with ICP odometry (using only the 3D LiDAR), and map the environment using both the point cloud generated from the LiDAR and the RGB-D camera. My goal is to have a combined point cloud from both sensors.

So far, I have RTAB-MAP mapping properly using only the LiDAR and ICP Odometry. A 3D point cloud is created as expected, and projected properly on the map frame. I used the velodyne example from the ROS2 branch of RTAB-MAP on github as a reference.

However, when I try to add the RGB-D camera I am facing a few issues. When I try to use both the camera and the LiDAR, the resulting point clouds are not merged (I have tried using the Point Cloud Assembler and Point Cloud Aggregator nodes, but my approach does not seem to work).

To clarify, I am using:
   - ICP odometry node using the LiDAR input
   - Base RTAB-MAP node (for SLAM), which uses both the LiDAR input and the RGB-D input (rgb and depth data from the camera)
   - Point cloud assembler node using the data from the 'odom_filtered_input_scan' topic and the RGB-D topics from the camera (a minimal launch sketch of this setup is shown right after this list)
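To make this more concrete, the node topology above boils down to roughly the following ROS 2 Python launch sketch (topic names and frame ids are placeholders and the point cloud assembler is left out for brevity; my full launch file is attached at the end of this post):

from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    # Placeholder topic names; adjust to your robot/simulation.
    lidar_topic = '/livox/lidar'
    rgb_topic = '/camera/color/image_raw'
    depth_topic = '/camera/aligned_depth_to_color/image_raw'
    info_topic = '/camera/color/camera_info'

    return LaunchDescription([
        # ICP odometry computed from the 3D LiDAR only
        Node(package='rtabmap_odom', executable='icp_odometry', output='screen',
             parameters=[{'frame_id': 'base_footprint'}],
             remappings=[('scan_cloud', lidar_topic)]),

        # SLAM node subscribing to both the LiDAR cloud and the RGB-D images
        Node(package='rtabmap_slam', executable='rtabmap', output='screen',
             parameters=[{'frame_id': 'base_footprint',
                          'subscribe_scan_cloud': True,
                          'subscribe_depth': True}],
             remappings=[('scan_cloud', lidar_topic),
                         ('rgb/image', rgb_topic),
                         ('depth/image', depth_topic),
                         ('rgb/camera_info', info_topic)]),
    ])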

The result is two point clouds that are not combined into a single one; the RGB-D point cloud is not even overlaid on the point cloud produced by the LiDAR. See below:

[Image: the two separate 3D point clouds, LiDAR and RGB-D camera]

I tried troubleshooting the problem in the following way:

 1. Launch RTAB-MAP using only RGB-D odometry, with the camera as the main sensor.
 2. Check the TF tree and the transformations from the camera frame to the base_footprint frame.
 3. Check the robot URDF model (example commands for steps 2 and 3 are shown right after this list).
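For reference, steps 2 and 3 can be checked with the standard tools (frame and file names below are placeholders):

# Print the transform between base_footprint and the camera optical frame
ros2 run tf2_ros tf2_echo base_footprint camera_color_optical_frame

# Dump the whole TF tree to frames.pdf
ros2 run tf2_tools view_frames

# Validate the URDF generated from the xacro files
xacro robot.urdf.xacro > /tmp/robot.urdf && check_urdf /tmp/robot.urdf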

My conclusions are the following:

- When launching RTAB-MAP using only RGB-D odometry, the point cloud seems to be transformed in a weird way (see below). As I drive around, the point cloud gets worse and worse, which is expected if a frame transformation is not being computed properly.






- When checking the TF tree and the transformations, everything seems to be correct (see attached image).



- The robot URDF seems fine too.

I am quite perplexed, because everything works fine with the 3D LiDAR but all the problems come from using the camera (even in single RGB-D camera mapping...).

If anyone can help me out that would be very nice, and if you want more explanations or clarifications I am happy to provide them. Sorry if I am misusing some terms; I am quite new to this...


To run the simulation, you can have a look at the repo I am using (https://github.com/Althyrios/leo_simulator-ros2).

I am also attaching the rtabmap launch file I am using to combine the two sensors (RGB-D camera and 3D LiDAR): rtabmap_livox-rgbd.py

Thanks!

- Alex

---

Edit: added a missing figure

Re: Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)

matlabbe
Administrator
Hi,

It seems that the rgbd_camera config in the URDF is wrong, or it is not compatible with the Gazebo version I am using. To make the camera work correctly, I had to hard-code these values (based on that example):
diff --git a/leo_description/urdf/macros.xacro b/leo_description/urdf/macros.xacro
index 98cdf40..b440345 100644
--- a/leo_description/urdf/macros.xacro
+++ b/leo_description/urdf/macros.xacro
@@ -750,11 +750,10 @@
           <camera>
             <image>
               <format>R8G8B8</format>
-              <width>${gazebo_camera_width}</width>
-              <height>${gazebo_camera_height}</height>
+              <width>320</width>
+              <height>240</height>
             </image>
-            <horizontal_fov>${gazebo_camera_horizontal_fov}</horizontal_fov>
-            <vertical_fov>${gazebo_camera_vertical_fov}</vertical_fov>
+            <horizontal_fov>1.047</horizontal_fov>
             <clip>
               <near>${gazebo_camera_color_clip_near}</near>
               <far>${gazebo_camera_color_clip_far}</far>
This is because cx/cy seem stuck at 160/120, which only works if the resolution is 320x240. For the FOV, if I set anything else, the cloud no longer matches the lidar cloud.
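To see why only those values are consistent: with a pinhole model the intrinsics follow directly from the image size and the horizontal FOV, so cx/cy of 160/120 only correspond to a 320x240 image (quick sanity check in plain Python, not RTAB-Map code):

import math

width, height = 320, 240
hfov = 1.047  # horizontal FOV in radians (~60 deg)

fx = width / (2.0 * math.tan(hfov / 2.0))  # ~277.1 px
fy = fx                                    # square pixels assumed
cx, cy = width / 2.0, height / 2.0         # 160.0, 120.0 -> matches the stuck values

print(fx, fy, cx, cy)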

Before: [screenshot]

After: [screenshot]


For the rtabmap config using the lidar and the camera at the same time, here is an updated version of your launch file: rtabmap_livox-rgbd_ML.py


cheers,
Mathieu

Re: Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)

Alex Frantz
Hi again,

Sorry for the late reply on this thread. Thanks a lot for taking the time to reply to my questions!

I have tried to incorporate your proposed solution and here are my findings:

1. The camera's intrinsic parameters were indeed wrong. I too cannot use any values other than the ones you suggested, and I am currently unsure why; with other values, the camera's depth cloud is not properly aligned with the floor.

2. Regarding the launch config for rtabmap to produce a point cloud that merges the inputs from both the RGB-D camera and the 3D LiDAR, I have run into some issues. When I use the code you provided, the generated point cloud only seems to use the LiDAR data and ignores the RGB-D input. However, I noticed that you had set the parameter:

RGBD/CreateOccupancyGrid : False.

I experimented and set it to true, and at the same time added another parameter, Grid/Sensor: 2, which to my understanding allows RTABMAP to generate a point cloud using both the 3D LiDAR and the depth data (see the snippet below). With these two changes, I have been able to get what I need. Yet I am not sure if this is the correct approach, or if I missed something in your response to get the results I want.
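Concretely, in the launch file you sent, the only change is in the rtabmap node parameters (RTAB-Map parameters are passed as strings; the comments reflect my understanding of the values):

parameters=[{
    # ... rest of the rtabmap node parameters unchanged ...
    'RGBD/CreateOccupancyGrid': 'true',  # let rtabmap build the local grids/clouds
    'Grid/Sensor': '2',                  # 0 = lidar only, 1 = depth camera only, 2 = both
}]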

Would it be possible to confirm or correct my approach, please?

Cheers,

Alex

Re: Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)

matlabbe
Administrator
Hi Alex,

If you need a single point cloud combining the lidar and RGB-D clouds, then yes, using "Grid/Sensor: 2" can do the job.

However, if you don't need the combined cloud online, it is possible to export both point clouds separately afterwards and combine them in CloudCompare/MeshLab.
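If scripting is preferred over CloudCompare/MeshLab, merging two exported clouds is also just a few lines (a sketch assuming Open3D and .ply exports; file names are placeholders):

import open3d as o3d

# Point clouds exported separately (e.g., as .ply files), both already in the map frame
lidar = o3d.io.read_point_cloud('lidar_cloud.ply')
camera = o3d.io.read_point_cloud('camera_cloud.ply')

merged = lidar + camera
o3d.io.write_point_cloud('merged_cloud.ply', merged)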

cheers,
Mathieu

Re: Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)

Alex Frantz
Hi Mathieu,

Thanks for the speedy reply! Okay, this clarifies things now. Indeed I need a merged point cloud online, so this solution works well for me :)

In parallel, I have been transitioning from the Gazebo simulation to a real robot, using the same configuration (RTABMAP, robot config, etc.). However, I ran into a new "problem".

When running the launch configuration for RTABMAP on the real robot (with Grid/Sensor: 2), the final point cloud (the merge of the RGB-D camera and the LiDAR) does not seem to be aligned properly. In fact, I notice a considerable displacement between the two (see images). What's more, in the Gazebo simulation this displacement is completely absent, which has led to some confusion on my end.

At first I thought that perhaps RTABMAP does not look at the transforms from camera -> base_footprint and LiDAR -> base_footprint when merging the two point clouds (i.e., it simply adds all the points into a single PointCloud2 message without checking whether they should be aligned further), but this does not seem to make sense because, in the simulation, RTABMAP correctly merges and aligns them as expected (no shift between the two point clouds).

Another point to add is that both robot descriptions, real and in simulation, are identical.

Here are some screenshots to give you a visual example:

Simulation: [screenshots]

Real example: [screenshots]



In the real example, there is a shift between the white and the colored points, when in fact they should be on top of one another (i.e., part of the same object). This does not happen in the simulation.


----

The RTABMAP launch file remains the same as the one you provided in the solution above (same thread), with just two parameters changed:

1. "RGBD/CreateOccupancyGrid: true"
2. "Grid/Sensor: 2"

I am subscribing to the "/cloud_map" topic, which is what is visualized in the screenshots, and the reference frame in RViz is "/map".

In terms of the sensor placement on the real/simulated robot, I am also attaching a picture of the robot here:



Red circle ---> LiDAR Livox MID360
Yellow Rectangle ---> RealSense RGB-D Camera D455

It might be naive to suggest, but the displacement in the merged point cloud seems to be more or less the same as the distance between the RGB-D camera and the LiDAR... could this be linked to the cause of this issue?

My main concern is to figure out a way to properly merge the point clouds in the real-life scenario. Any ideas?

Thanks a lot and I look forward to your reply!

Please let me know if you need more details, I am happy to provide them!

Cheers,

Alex

Re: Combine 3D Lidar and RGB-D Camera to create a single 3D point cloud (sensor fusion)

matlabbe
Administrator
If the clouds are not aligned when the robot is not moving:
 * The transform between the camera and the lidar may not be accurate (sometimes a rotation error can cause a bigger issue at longer distances), so a lidar/camera calibration may be required.
 * The scale of the camera point cloud may be off (I would expect the lidar scale to be close to reality) because of a bad calibration. A small experiment to verify this is to measure the distance between the camera and a wall with a tape measure and compare it with the values published by the depth image (a small sketch for this check is shown after this list).
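A minimal sketch for that depth check, assuming an rclpy setup and a placeholder depth topic, could be:

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

class DepthAtCenter(Node):
    def __init__(self):
        super().__init__('depth_at_center')
        self.bridge = CvBridge()
        # Placeholder topic; use the real camera's depth topic
        self.create_subscription(Image, '/camera/depth/image_raw', self.cb, 10)

    def cb(self, msg):
        depth = self.bridge.imgmsg_to_cv2(msg)  # 16UC1 (mm) or 32FC1 (m)
        d = float(depth[depth.shape[0] // 2, depth.shape[1] // 2])
        if msg.encoding == '16UC1':
            d /= 1000.0  # millimeters to meters
        self.get_logger().info('Depth at image center: %.3f m' % d)

rclpy.init()
rclpy.spin(DepthAtCenter())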

If the clouds are not aligned when the robot is moving:
 * Check if the camera and lidar timestamps are synced with the same computer's clock.
 * You can try enabling "odom_sensor_sync:=true" for the rtabmap node; this will adjust the camera point cloud and the lidar scan based on their stamps and the received odometry (see the example after this list).
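To compare the stamps and try the sync option (topic names are placeholders; odom_sensor_sync is a regular ROS parameter of the rtabmap node, and --field requires a recent ROS 2 distro):

# Compare the stamps coming from the lidar and the camera
ros2 topic echo /livox/lidar --field header.stamp
ros2 topic echo /camera/color/image_raw --field header.stamp

# In the launch file, on the rtabmap node:
#   parameters=[{'odom_sensor_sync': True}]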

Unless the camera can see stuff that the lidar cannot see, you may consider just using the lidar for the occupancy grid.

cheers,
Mathieu