External Odometry integration


External Odometry integration

AlmogT
Hi everyone,

I'm working on a handheld SLAM system and would appreciate guidance on how to properly integrate RTAB-Map into my pipeline to generate the most accurate 3D map.

My current setup includes:
Custom Visual-Inertial Odometry (VIO): Based on stereo cameras and IMU. I plan to feed RTAB-Map a post-processed, smooth trajectory (i.e., already loop-closed, no jumps).

LiDAR: A solid-state 3D LiDAR with an 80° horizontal FOV and 10 Hz scan rate, producing relatively dense short-range point clouds (up to ~60m).

IMU: XSens IMU at 100 Hz. It is already used within the VIO, but I can also publish it separately if useful for RTAB-Map.

My goals:
Use RTAB-Map only for 3D map building, not for localization.
Feed the VIO trajectory as external odometry input.
Integrate LiDAR point clouds into the map.
Optionally provide IMU data, if it improves mapping quality.

My questions:
Are there any examples of a similar setup? For example, a launch file that shows how to integrate external odometry with RTAB-Map?

Given the LiDAR's limited 80° FOV, are there recommended parameters or techniques to improve scan matching or overall map quality?

Should I provide IMU data to RTAB-Map even though it’s already used in the VIO? Would this enhance mapping accuracy or risk introducing redundant/noisy data?

Any key RTAB-Map parameters you recommend tuning for this type of hybrid setup? (e.g., memory management, keyframe thresholds, sensor fusion settings)

My stereo cameras are approximately synchronized (within a few milliseconds). How should I feed them into RTAB-Map? Do I need to insert a synchronization node between the image topics?

Below is the RTAB-Map launch configuration I’m currently using:

# `use_sim_time` is assumed to be defined earlier in the launch file.
from launch_ros.actions import Node

rtabmap_slam = Node(
    package='rtabmap_slam', executable='rtabmap', output='screen',
    namespace='rtabmap',  # the namespace is a Node argument, not a ROS parameter
    parameters=[{
        'frame_id': 'scaner_base',
        'map_frame_id': 'map',         # ensure this matches your TF setup
        'subscribe_depth': False,
        'subscribe_rgb': False,
        'subscribe_scan_cloud': True,
        'approx_sync': True,
        'wait_for_transform': 0.2,
        'use_sim_time': use_sim_time,
        # 'odom_frame_id': 'world',    # uncomment to read odometry from TF instead of the odom topic
        # RTAB-Map's internal parameters are strings:
        'Grid/FromDepth': 'False',     # build the grid from the lidar cloud, not depth images
        'Grid/3D': 'True',             # enable 3D grid mapping
        'Grid/Color': 'True',          # enable color in the grid mapping
        'Grid/RayTracing': 'False',    # optional: enable ray tracing if not using depth
    }],
    remappings=[
        ('scan_cloud', '/lidar/scan_3D'),
        ('imu', '/imu/data'),
        ('odom', '/state_optimizer/odometry_corrected')
    ],
    arguments=[
        '-d'  # deletes the previous database (~/.ros/rtabmap.db)
    ])



However, with this setup, I’m currently getting a poor-quality map — the map is sparse, inconsistent, and not well-aligned with the environment. I'm not sure if the issue is related to TF frames, sensor synchronization, or odometry integration.

Any insights or advice would be greatly appreciated!

Re: External Odometry integration

matlabbe
Administrator
Hi AlmogT

Are there any examples of a similar setup? For example, a launch file that shows how to integrate external odometry with RTAB-Map?
It would be similar to what you did: just start the rtabmap node with the odom topic remapped to your odometry topic, or set odom_frame_id to use TF instead. In both cases, your VIO node should publish the TF corresponding to the odom topic.
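As a minimal launch sketch of the two options (topic and frame names are placeholders taken from the question; adapt them to your own TF tree):

```python
from launch_ros.actions import Node

# Option A: feed odometry by topic (rtabmap subscribes to nav_msgs/Odometry)
rtabmap = Node(
    package='rtabmap_slam', executable='rtabmap', output='screen',
    parameters=[{'frame_id': 'scaner_base',
                 'subscribe_scan_cloud': True}],
    remappings=[('scan_cloud', '/lidar/scan_3D'),
                ('odom', '/state_optimizer/odometry_corrected')])

# Option B: read odometry from TF instead (no odom topic subscription):
#     parameters=[{'frame_id': 'scaner_base',
#                  'odom_frame_id': 'world',
#                  'subscribe_scan_cloud': True}]
```

Either way, the odom-to-base TF must be published by the VIO side; rtabmap only adds the map-to-odom correction on top.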

Given the LiDAR's limited 80° FOV, are there recommended parameters or techniques to improve scan matching or overall map quality?
It depends on the level at which you need to improve scan registration. If your VIO is accurate enough, you may only need to care about loop closure detection. Loop closure detection only works if you provide stereo or RGB-D input at the same time. With Reg/Strategy=1, a first visual guess will be computed, then refined with ICP using your lidar data.

If VIO is not accurate enough and you need to do scan registration between consecutive scans, then you could enable RGBD/NeighborLinkRefining. Note that you have to set up the ICP parameters appropriately to make it work efficiently.
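A possible starting point for those parameters in the launch file is sketched below; the numeric values are my assumptions to tune against your lidar, not recommendations from this thread (all RTAB-Map internal parameters are passed as strings):

```python
# ICP-related RTAB-Map parameters (merge into the rtabmap node's parameters dict).
icp_params = {
    'Reg/Strategy': '1',                     # visual guess refined by ICP
    'RGBD/NeighborLinkRefining': 'true',     # refine consecutive poses with the lidar
    'Icp/VoxelSize': '0.3',                  # downsample clouds before registration
    'Icp/PointToPlane': 'true',              # point-to-plane usually converges better on dense clouds
    'Icp/PointToPlaneK': '20',               # neighbors used to estimate normals
    'Icp/MaxCorrespondenceDistance': '1.0',  # reject distant matches
    'Icp/CorrespondenceRatio': '0.2',        # lower ratio tolerates the limited 80° FOV overlap
}
```

With a narrow FOV, consecutive scans share less overlap, so a lower Icp/CorrespondenceRatio keeps registrations from being rejected outright; verify each value against your data.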

Should I provide IMU data to RTAB-Map even though it’s already used in the VIO? Would this enhance mapping accuracy or risk introducing redundant/noisy data?
No need, but if your VIO is always aligned with gravity, you can enable Mem/UseOdomGravity so that the map optimization stays aligned with gravity.

Any key RTAB-Map parameters you recommend tuning for this type of hybrid setup? (e.g., memory management, keyframe thresholds, sensor fusion settings)
If you are going to sync lidar + stereo camera, you can enable odom_sensor_sync to better interpolate where the frames should be when the sensors have different timestamps. If you want to provide stereo data, use a rtabmap_util/stereo_sync or rtabmap_util/rgbd_sync node, then enable subscribe_rgbd on the rtabmap node and connect the rgbd_image topic.
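A sketch of that wiring (camera topic names are placeholders; note that in recent releases the sync nodes may live in the rtabmap_sync package rather than rtabmap_util):

```python
from launch_ros.actions import Node

# stereo_sync bundles left/right images + camera_info into one rgbd_image message
stereo_sync = Node(
    package='rtabmap_util', executable='stereo_sync', output='screen',
    parameters=[{'approx_sync': True}],  # cameras are a few ms apart -> approximate sync
    remappings=[('left/image_rect', '/camera/left/image_rect'),
                ('right/image_rect', '/camera/right/image_rect'),
                ('left/camera_info', '/camera/left/camera_info'),
                ('right/camera_info', '/camera/right/camera_info')])

# On the rtabmap node, consume the synchronized output:
#     parameters=[{'subscribe_rgbd': True, 'odom_sensor_sync': True}],
#     remappings=[('rgbd_image', '/rgbd_image')]
```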

My stereo cameras are approximately synchronized (within a few milliseconds). How should I feed them into RTAB-Map? Do I need to insert a synchronization node between the image topics?
Yes, use rtabmap_util/stereo_sync or rtabmap_util/rgbd_sync node as explained in previous point.

However, with this setup, I’m currently getting a poor-quality map — the map is sparse, inconsistent, and not well-aligned with the environment. I'm not sure if the issue is related to TF frames, sensors sync, or odometry integration.
If you can share a database showing the misalignment you are seeing, that could be helpful. When dealing with multiple sensors, make sure their timestamps are synchronized to the same clock (e.g., the system clock) and that the TF between the sensors is accurate (this may require extrinsic calibration in some cases).
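One quick way to rule out clock issues before digging into TF (a plain-Python sketch, not RTAB-Map code) is to export per-sensor header timestamps from a bag and check the worst offset between nearest pairs:

```python
from bisect import bisect_left

def max_nearest_offset(stamps_a, stamps_b):
    """For each stamp in stamps_a, find the nearest stamp in stamps_b
    and return the largest absolute offset (both lists sorted, in seconds)."""
    worst = 0.0
    for t in stamps_a:
        i = bisect_left(stamps_b, t)
        candidates = stamps_b[max(i - 1, 0):i + 1]  # neighbors around insertion point
        nearest = min(candidates, key=lambda s: abs(s - t))
        worst = max(worst, abs(nearest - t))
    return worst

# Example: lidar at 10 Hz vs. camera at 20 Hz, camera clock lagging by 3 ms
lidar = [0.003 + 0.1 * k for k in range(10)]
cam = [0.05 * k for k in range(20)]
print(max_nearest_offset(lidar, cam))  # ~0.003 s: small enough for approx_sync
```

If the worst offset is on the order of a scan period or more, the sensors are not sharing a clock and no amount of parameter tuning will fix the map.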