About Odometry of RTAB-map

About Odometry of RTAB-map

jlyw1017
According to the paper
"RTAB-Map as an Open-Source Lidar and Visual SLAM Library for Large-Scale and Long-Term Online Operation",
the motion estimation is done by PnP RANSAC,
but according to this answer
http://official-rtab-map-forum.206.s1.nabble.com/How-does-RGBD-Slam-works-td191.html
it is done by RANSAC on matched 3D features.
I'm kind of confused here. So how is it done?
To my knowledge, PnP RANSAC is more of a monocular-camera approach: it matches 2D image points to known 3D features.
With RGB-D and stereo cameras we already have the 3D information, so why do we still need PnP RANSAC in the paper mentioned above?
Thank you for your reply.

Re: About Odometry of RTAB-map

matlabbe
Administrator
Hi,

Note that the post you referred to is from 2015; the default is now 2D->3D PnP transformation estimation (Vis/EstimationType=1) instead of 3D->3D estimation (Vis/EstimationType=0).
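
For example, assuming the standalone rtabmap tool (you can list all available parameters with "rtabmap --params"; in ROS the same parameter names are set on the rtabmap node), the mode could be switched like this:

rtabmap --Vis/EstimationType 1   # 2D->3D PnP RANSAC (current default)
rtabmap --Vis/EstimationType 0   # 3D->3D correspondence estimation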

The reason to use PnP with cameras is that it can use very far features (with very poor depth estimates) in the transformation estimation, which results in a better orientation estimate (far features are good for estimating orientation). With 3D->3D estimation, feature matching is restricted to a fixed correspondence radius (e.g., max 10 cm error), which filters out all far features with high depth noise and low accuracy, so the orientation estimate is not as good as with PnP. With PnP, far features with poor depth estimates won't affect the accuracy of the translation component very much, as long as most of the inlier features are close; the far features just help to get a good orientation estimate.
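
To make the distinction concrete, here is a minimal sketch of the two estimation modes (this is not RTAB-Map's actual code; the function names, thresholds and iteration counts are illustrative), using OpenCV and NumPy:

import numpy as np
import cv2

# 2D->3D (PnP RANSAC, like Vis/EstimationType=1):
# pts3d are 3D feature positions from the previous frame/map,
# pts2d are the matched 2D keypoints in the current image,
# K is the 3x3 camera intrinsic matrix.
def estimate_pnp(pts3d, pts2d, K):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d.astype(np.float32), pts2d.astype(np.float32),
        K, None,                 # assuming no lens distortion
        reprojectionError=2.0,   # inlier threshold in pixels
        iterationsCount=300)
    return ok, rvec, tvec, inliers

# 3D->3D (like Vis/EstimationType=0): closed-form Kabsch/Umeyama
# rigid fit on matched 3D points. A real implementation would wrap
# this in a RANSAC loop to reject outlier correspondences, whereas
# solvePnPRansac above handles outlier rejection internally.
def estimate_rigid_3d(src, dst):
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src                       # dst ~ R @ src + t
    return R, t

Note that the 2D->3D version never thresholds on depth error, which is why far features can still contribute to the rotation estimate.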

cheers,
Mathieu