RGB_odometry's precision?


kabuto
Hi Mathieu,

I'm currently working with a robot that operates in a 2D environment, using an RGBD camera (similar to a RealSense) for visual odometry. I've set Reg/Force3DoF to true, so the output of the visual odometry only includes X, Y, and Yaw. My testing area is relatively small, approximately 4x4 meters, with almost no moving objects. The robot also runs at a low speed: 0.1 m/s and 0.1 rad/s.

I'm using F2M odometry with 1000 features per frame, and I've observed that the quality is around 300 features per frame. I have a question about my setup: I'm using both the front and back cameras as two dedicated visual odometry instances, so each camera generates its own odometry. Can the presence of these two VO instances affect RTAB-Map's processing? From my tests, RTAB-Map works perfectly with two ICP odometry instances.

To improve precision, I manually placed multiple QR codes throughout the environment. These QR codes are also within the laser range of the camera, so the depth information on them should be quite accurate. However, I haven't noticed significant quality improvements with this approach.

Do you have any suggestions to enhance the visual odometry?

Here are my settings for the VO; I'm also turning on approx_sync.
ComposableNode(
    package='rtabmap_odom',
    plugin='rtabmap_odom::RGBDOdometry',
    name='rgbd_odometry_front',
    parameters=[{
        'frame_id': 'camera_front_link',
        'qos': qos,
        'qos_image': qos,
        'qos_imu': qos,
        'qos_camera_info': qos,
        # 'log_to_rosout_level': 1,
        'wait_imu_to_init': True,
        'expected_update_rate': 30.0,
        'Reg/Force3DoF': 'true',
        'RGBD/NeighborLinkRefining': 'true',
        'subscribe_odom_info': True,
        'publish_tf': False,
        'subscribe_rgbd': True,
        'Vis/MaxFeatures': '1500',
        'Optimizer/GravitySigma': '0',  # Disable IMU constraints (we are already in 2D)
    }],
    remappings=[
        ('odom', 'front_camera/odom'),
        ('imu', 'imu/data'),
        ('rgbd_image', '/rgbd_sync_front/rgbd_image'),
    ]
)



Best,
Herbert

Re: RGB_odometry's precision?

matlabbe
Administrator
If you were using a RealSense camera, I would tell you to use the IR cameras instead of the RGB to get a larger field of view, and also to have perfectly synchronized depth aligned with the left camera.

The biggest factors for VO accuracy are the field of view of the camera, the synchronization between the cameras, and the type of shutter (rolling vs. global).

With multiple cameras, it can be beneficial to combine both cameras in the same rgbd_odometry node. rtabmap needs to be built with OpenGV, and rtabmap_ros with "-DRTABMAP_SYNC_MULTI_RGBD=ON". Note however that the camera stamps should be synchronized on the same clock, and preferably the cameras should be hardware-synchronized, though if the frame rate is high and the robot is slow, approximate sync can be "okay".
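As a minimal sketch of that combined setup (assuming rtabmap_ros is built with "-DRTABMAP_SYNC_MULTI_RGBD=ON"; the frame_id and topic names below are illustrative, taken from the front-camera launch fragment earlier in the thread), a single rgbd_odometry node can subscribe to both cameras through the indexed rgbd_image0/rgbd_image1 topics by setting the rgbd_cameras parameter to 2:

```python
from launch_ros.descriptions import ComposableNode  # part of ROS 2 launch_ros

ComposableNode(
    package='rtabmap_odom',
    plugin='rtabmap_odom::RGBDOdometry',
    name='rgbd_odometry',
    parameters=[{
        'frame_id': 'base_link',    # a common base frame for both cameras (assumed name)
        'subscribe_rgbd': True,
        'rgbd_cameras': 2,          # subscribe to rgbd_image0 and rgbd_image1
        'approx_sync': True,        # acceptable if frame rate is high and the robot is slow
        'Reg/Force3DoF': 'true',
        'publish_tf': False,
    }],
    remappings=[
        # Topic names assumed from the rgbd_sync nodes mentioned above
        ('rgbd_image0', '/rgbd_sync_front/rgbd_image'),
        ('rgbd_image1', '/rgbd_sync_back/rgbd_image'),
        ('odom', 'odom'),
    ],
)
```

The TF transforms from frame_id to each camera's optical frame must be available so the node can express both cameras' features in the common base frame.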

Re: RGB_odometry's precision?

kabuto
Hi Mathieu,

Thank you for your quick response!
My camera can do IR/RGB/Depth, all at a resolution of 1280x800. Is it still worth using the IR sensor?

Re: RGB_odometry's precision?

kabuto
In reply to this post by matlabbe
I've got one more question: if I have relatively precise wheel odometry, would the parameter "guess_frame_id": "odom" help the VO accuracy?

Re: RGB_odometry's precision?

matlabbe
Administrator
In reply to this post by kabuto
If the IR has a larger field of view, probably. Also, are they using a rolling or a global shutter? How is the depth generated? From the same IR camera? Using RGB + Depth mode may require a registration between the IR camera used to generate the depth and the RGB camera, and they may also not be exactly synchronized (as the IR and depth would be if the depth is generated from the same IR sensor).

Re: RGB_odometry's precision?

matlabbe
Administrator
In reply to this post by kabuto
Yes, and it may also be useful when visual odometry is lost. By enabling "Odom/ResetCountdown" (>0) at the same time, if the visual odometry gets lost, the TF tree will still be available to the robot and the wheel odometry will be used during that time. Once visual odometry can reset, it will restart according to the wheel odometry motion during that time.
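Applied to the front-camera fragment posted earlier in the thread, this combination looks roughly like the sketch below (the "odom" frame name is an assumption: it must be the TF frame published by the wheel odometry; the reset countdown value is also illustrative):

```python
from launch_ros.descriptions import ComposableNode  # part of ROS 2 launch_ros

ComposableNode(
    package='rtabmap_odom',
    plugin='rtabmap_odom::RGBDOdometry',
    name='rgbd_odometry_front',
    parameters=[{
        'frame_id': 'camera_front_link',
        'guess_frame_id': 'odom',       # TF frame of the wheel odometry (assumed name)
        'Odom/ResetCountdown': '1',     # auto-reset shortly after VO gets lost
        'subscribe_rgbd': True,
        'publish_tf': False,            # wheel odometry keeps publishing the TF tree
    }],
    remappings=[
        ('rgbd_image', '/rgbd_sync_front/rgbd_image'),
        ('odom', 'front_camera/odom'),
    ],
)
```

With guess_frame_id set, the node uses the wheel odometry motion between frames as the initial guess for visual registration, and while VO is lost the wheel odometry still provides a usable pose.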