Tips for Optimizing RGB-D Visual Odometry with Intel RealSense D435i

Hi Mathieu,
First of all, thank you very much for the incredible work you're doing with RTAB-Map; it's an amazing project.

I’m currently experimenting with RGB-D odometry to complement the laser odometry of my mobile robot, and I would really appreciate some advice on fine-tuning a few parameters.

Context
My goal is to use RGB-D visual odometry in environments where my robot's 2D laser odometry (with wheel odometry as the initial guess) struggles, for example in long corridors with very few geometric features.
The idea is to fuse both odometries later (a sketch of what I have in mind follows below), but before that I'm trying to get a robust and stable RGB-D odometry.
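
For that later fusion step, what I currently have in mind is an EKF from robot_localization, roughly like the untested sketch below. The topic name '/laser_odom' is a hypothetical placeholder for my laser odometry, and the config vectors (which fuse vx, vy, vyaw from each source) are just my first guess:

    Node(
        package='robot_localization',
        executable='ekf_node',
        name='ekf_odom',
        output='screen',
        parameters=[{
            'use_sim_time': True,
            'frequency': 30.0,
            'two_d_mode': True,                 # planar robot
            'odom_frame': 'odom',
            'base_link_frame': 'base_footprint',
            'world_frame': 'odom',
            'odom0': '/laser_odom',             # hypothetical topic name for my laser odometry
            'odom0_config': [False, False, False, False, False, False,
                             True,  True,  False, False, False, True,
                             False, False, False],   # fuse vx, vy, vyaw
            'odom1': '/rtabmap/odom',
            'odom1_config': [False, False, False, False, False, False,
                             True,  True,  False, False, False, True,
                             False, False, False],
        }],
    ),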

At the moment I am running all experiments on a ROS 2 (Humble) rosbag for which I also have a ground-truth trajectory available. This allows me to directly compare the RGB-D odometry against the reference data while fine-tuning parameters.
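
Concretely, once the estimated and ground-truth poses are associated by timestamp, I compute the absolute trajectory error with a small script along these lines (a minimal sketch, not tied to any specific evaluation tool; gt_xyz and est_xyz are assumed to be already-matched (N, 3) position arrays):

    import numpy as np

    def ate_rmse(gt_xyz, est_xyz):
        """RMSE of the absolute trajectory error after rigid (no-scale) alignment.

        Both inputs are (N, 3) arrays of positions already associated by timestamp.
        """
        # Center both trajectories
        mu_gt, mu_est = gt_xyz.mean(axis=0), est_xyz.mean(axis=0)
        gt_c, est_c = gt_xyz - mu_gt, est_xyz - mu_est
        # Kabsch/Umeyama: best rotation mapping the estimate onto the ground truth
        U, _, Vt = np.linalg.svd(est_c.T @ gt_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        err = gt_c - est_c @ R.T
        return float(np.sqrt((err ** 2).sum(axis=1).mean()))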

Hardware setup
- Differential drive robot
- 2D LIDAR (25 m range)
- Intel RealSense D435i (IMU integrated, depth + IR)

The robot currently uses laser odometry, which works well except in feature-poor environments.

I read in some older forum posts that you often recommend using Infrared + Depth for RGB-D odometry rather than RGB.
I would like to ask whether you still recommend that approach, and whether you have updated suggestions for resolutions and RealSense driver parameters.

I’ve seen that the suggested resolutions are:
- 848×480 for D435(i)
- 1280×720 for D455

In the near future I may also test a D455, but for now I’m working with the D435i.

Questions
Do you have recommended settings for the following? (My current driver launch is sketched just after this list.)
- IR stream
- Depth stream
- Frame rate
- Emitter settings
- RealSense filters (temporal, spatial, hole filling…)
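
For reference, this is roughly how I launch the camera at the moment. It is only a sketch: the parameter names follow the realsense2_camera ROS 2 wrapper version I have installed and may differ in other versions (e.g., depth_module.profile vs. depth_module.depth_profile), and the values are my current guesses, not recommendations:

    Node(
        package='realsense2_camera',
        executable='realsense2_camera_node',
        name='rgbd_camera',
        output='screen',
        parameters=[{
            'enable_color': False,
            'enable_infra1': True,                 # left IR, rectified, shares the depth intrinsics
            'enable_depth': True,
            'depth_module.profile': '848x480x30',  # suggested resolution for the D435(i)
            'depth_module.emitter_enabled': 0,     # 0 = off, so the projected dot pattern
                                                   # does not show up in the IR images used for tracking
            'enable_gyro': True,
            'enable_accel': True,
            'unite_imu_method': 2,                 # 2 = linear interpolation into a single imu topic
        }],
    ),

I kept the emitter off because the dot pattern disturbs feature tracking in the IR images, but I am not sure that is the right tradeoff in dark corridors where depth density also matters.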

RGB-D Odometry parameters (F2M)
Below is my current configuration.
Are there any parameters you think I should review or tune more carefully to improve performance?

        Node(
            package='rtabmap_odom',
            executable='rgbd_odometry',
            name='rgbd_odometry',
            output='screen',
            emulate_tty=True,
            parameters=[
            {
                "use_sim_time": True,
                "frame_id": "base_footprint",
                "odom_frame_id": "odom",
                "publish_tf": False,
                "ground_truth_frame_id": "",
                "ground_truth_base_frame_id": "",
                "wait_for_transform": 0.4,
                "wait_imu_to_init": True,
                "always_check_imu_tf": True,
                "approx_sync": False,
                "approx_sync_max_interval": 0.2,
                "config_path": "",
                "topic_queue_size": 10,
                "sync_queue_size": 10,
                "qos": 2,
                "qos_camera_info": 2,
                "qos_imu": 2,
                "subscribe_rgbd": True,
                "guess_frame_id": "odom",
                "guess_min_translation": 0.0,
                "guess_min_rotation": 0.0,
                "always_process_most_recent_frame": False, #'Odometry: always process latest frame to reduce delay, skipping frames in case odometry is slower than camera frame rate. In case you want to make sure to process all frames (e.g., from a rosbag/dataset) and you don\'t care about delay, set this to false.'),
                "keep_color": True,
                "publish_null_when_lost": False,    
                "Reg/Force3DoF": "True",
                "Odom/ResetCountdown": "20",
                "OdomF2M/InitDepthFactor": "0.05",
                "OdomF2M/MaxNewFeatures": "400",
                "OdomF2M/MaxSize": "1500",
                "OdomF2M/ValidDepthRatio": "0.5",
                "Vis/MaxFeatures": "1200",
                "Vis/MinInliers": "12",
                "Vis/InlierDistance": "0.014",
                "OdomF2M/BundleAdjustment": "1",
                "OdomF2M/BundleAdjustmentMaxFrames": "4",
                "OdomF2M/BundleAdjustmentMinMotion": "0.02",
            },
            ],
            remappings=[
            ('rgb/image', '/rgbd_camera/infra1/image_rect_raw'),
            ('depth/image', '/rgbd_camera/depth/image_rect_raw'),
            ('rgb/camera_info', '/rgbd_camera/infra1/camera_info'),
            ('rgbd_image', '/rtabmap/rgbd_image'),
            ('odom', '/rtabmap/odom'),
            ('imu', '/rgbd_camera/imu/data_filtered'),
            ]
        ),
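
For completeness, the /rtabmap/rgbd_image topic above is produced by a separate rtabmap_sync/rgbd_sync node, roughly like this (if I understand correctly, with subscribe_rgbd set to true the rgb/depth remappings on the odometry node itself are ignored and only rgbd_image is used; please correct me if that's wrong):

    Node(
        package='rtabmap_sync',
        executable='rgbd_sync',
        name='rgbd_sync',
        output='screen',
        parameters=[{
            'use_sim_time': True,
            # IR and depth come from the same sensor and share timestamps,
            # so exact synchronization should be fine here
            'approx_sync': False,
        }],
        remappings=[
            ('rgb/image', '/rgbd_camera/infra1/image_rect_raw'),
            ('depth/image', '/rgbd_camera/depth/image_rect_raw'),
            ('rgb/camera_info', '/rgbd_camera/infra1/camera_info'),
            ('rgbd_image', '/rtabmap/rgbd_image'),
        ],
    ),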

IMU integration in RGB-D Odometry
I’m also uncertain about how the IMU is actually integrated inside the RGB-D odometry pipeline. I obtain the IMU data from a Madgwick filter applied to the RealSense IMU stream (the filter launch is sketched after the questions below).

Could you clarify:
- how IMU data contributes to motion prediction?
- whether the IMU can help stabilize motion estimation in texture-less environments?
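
For reference, the filtered IMU topic comes from a node like this (a sketch of my current setup with the imu_filter_madgwick package; the raw topic name /rgbd_camera/imu is my assumption of what the driver publishes with unite_imu_method enabled):

    Node(
        package='imu_filter_madgwick',
        executable='imu_filter_madgwick_node',
        name='imu_filter',
        output='screen',
        parameters=[{
            'use_sim_time': True,
            'use_mag': False,       # the D435i has no magnetometer
            'world_frame': 'enu',
            'publish_tf': False,    # orientation is consumed by rgbd_odometry, not TF
        }],
        remappings=[
            ('imu/data_raw', '/rgbd_camera/imu'),
            ('imu/data', '/rgbd_camera/imu/data_filtered'),
        ],
    ),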

Conclusion
Any guidance on the RealSense configuration, IMU usage or odometry parameters would be really helpful.
Thanks again for your outstanding work and for the support you always provide on this forum!

Best regards,
Andrea