Re: Improving a lidar-only setup

Posted by matlabbe on
URL: http://official-rtab-map-forum.206.s1.nabble.com/Improving-a-lidar-only-setup-tp11071p11227.html

I looked at the rosbags with the cameras, and below is a launch file integrating both cameras with the 3D lidar. Note that this commit is required to make it work.

# example.launch.py: Example lidar3d with 2 realsense cameras

import os

from ament_index_python.packages import get_package_share_directory

from launch import LaunchDescription, LaunchContext
from launch.actions import DeclareLaunchArgument, IncludeLaunchDescription, OpaqueFunction
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node, SetParameter

def launch_setup(context: LaunchContext, *args, **kwargs):
  
  lidar3d_launch_file = 'lidar3d.launch.py'
  return [
    SetParameter(name='use_sim_time', value=LaunchConfiguration('use_sim_time')),
  
    # Sync rgb/depth/camera_info together for the first camera
    Node(
        package='rtabmap_sync', executable='rgbd_sync', name='rgbd_sync1', output='screen',
        parameters=[{'approx_sync': False}],
        remappings=[('rgb/image', '/runner_2/obstacle/left/camera/color/image_raw'),
                    ('rgb/camera_info', '/runner_2/obstacle/left/camera/color/camera_info'),
                    ('depth/image', '/runner_2/obstacle/left/camera/aligned_depth_to_color/image_raw'),
                    ('rgbd_image', '/runner_2/obstacle/left/camera/rgbd_image')]),
    
    # Sync rgb/depth/camera_info together for the second camera
    Node(
        package='rtabmap_sync', executable='rgbd_sync', name='rgbd_sync2', output='screen',
        parameters=[{'approx_sync': False}],
        remappings=[('rgb/image', '/runner_2/obstacle/right/camera/color/image_raw'),
                    ('rgb/camera_info', '/runner_2/obstacle/right/camera/color/camera_info'),
                    ('depth/image', '/runner_2/obstacle/right/camera/aligned_depth_to_color/image_raw'),
                    ('rgbd_image', '/runner_2/obstacle/right/camera/rgbd_image')]),
                                    
    # Sync two cameras together
    Node(
        package='rtabmap_sync', executable='rgbdx_sync', output='screen',
        parameters=[{'approx_sync': True, 'rgbd_cameras': 2}],
        remappings=[('rgbd_image0', '/runner_2/obstacle/left/camera/rgbd_image'),
                    ('rgbd_image1', '/runner_2/obstacle/right/camera/rgbd_image'),
                    ('rgbd_images', '/runner_2/obstacle/rgbd_images')]),
  
    # Filter the raw IMU (Madgwick) to compute the orientation used by rtabmap
    Node(
        package='imu_filter_madgwick', executable='imu_filter_madgwick_node', output='screen',
        parameters=[{'use_mag': False, 'publish_tf': False}],
        remappings=[('imu/data_raw', '/runner_2/localization/livox/imu/raw'),
                    ('imu/data', '/runner_2/localization/livox/imu')]),

    # Launch rtabmap
    IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(
            get_package_share_directory('rtabmap_examples'), 'launch', lidar3d_launch_file)),
        launch_arguments={'use_sim_time': LaunchConfiguration('use_sim_time'),
                          'voxel_size': '0.2',
                          'frame_id': 'runner_2/base_link',
                          'lidar_topic': '/runner_2/localization/livox/points/raw',
                          'imu_topic': '/runner_2/localization/livox/imu',
                          'rgbd_images_topic': '/runner_2/obstacle/rgbd_images'}.items()),
  ]
  
def generate_launch_description():
  return LaunchDescription([
    # Launch arguments         
    DeclareLaunchArgument(
      'use_sim_time', default_value='false',
      description='Use simulated time'),

    OpaqueFunction(function=launch_setup),
  ])

Usage:
ros2 launch example.launch.py use_sim_time:=true

ros2 bag play fast-848 --clock --qos-profile-overrides-path durability_override.yaml --remap /tf:=/tf_ignored
See previous posts for "durability_override.yaml" (tested on Humble).
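For reference, here is a minimal sketch of what that override file typically contains, assuming the usual case of forcing transient_local durability on /tf_static so the latched static transforms from the bag are actually received (check the previous posts for the exact file used with this bag):

# durability_override.yaml (assumed contents; adapt to your bag)
/tf_static:
  history: keep_last
  depth: 1
  reliability: reliable
  durability: transient_local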

I noticed that the camera data is actually not that useful because the depth is poorly estimated. Looking at the IR images, we can see a lot of glare / reflections in the cameras, which is possibly why the depth estimation is so poor. I am not sure what is causing the glare (is there an extra glass cover in front of the cameras? are the lenses dirty?).

This example uses RGB-D data from the RealSense cameras. It could also be possible to use left IR + depth for better accuracy, but the IR emitter would need to be disabled, since the projected IR dot pattern would corrupt the visual features.
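As a rough sketch, the first sync node above would then be remapped to the left IR topics instead of color; the infra1 topic names and the emitter parameter below are assumptions based on the default realsense2_camera naming, so adapt them to your driver configuration:

    # Hypothetical variant of rgbd_sync1 using left IR instead of color.
    # The RealSense driver would need the emitter disabled, e.g. by setting
    # 'depth_module.emitter_enabled': 0 in the realsense2_camera parameters.
    Node(
        package='rtabmap_sync', executable='rgbd_sync', name='rgbd_sync1', output='screen',
        parameters=[{'approx_sync': False}],
        remappings=[('rgb/image', '/runner_2/obstacle/left/camera/infra1/image_rect_raw'),
                    ('rgb/camera_info', '/runner_2/obstacle/left/camera/infra1/camera_info'),
                    ('depth/image', '/runner_2/obstacle/left/camera/depth/image_rect_raw'),
                    ('rgbd_image', '/runner_2/obstacle/left/camera/rgbd_image')]),

Note that with IR + depth there is no need for the aligned_depth_to_color topic: the left IR image and the depth image come from the same sensor, so they are already registered.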

cheers,
Mathieu