How does RGBD SLAM work?


How does RGBD SLAM work?

SirMinsk
Hello!

Does anyone know a source where the RGB-D SLAM used in rtabmap is explained in an easy way, from the images to the trajectory and point cloud? I have read many papers, but my image processing skills are very bad, so I don't understand much. Or can someone explain it to me briefly? The explanation doesn't have to be very scientific, just enough to get an overview if you want to use it.
What I understand is that the first step is feature detection on a new RGB image, then a comparison of the detected features between the new and the old image. Then the depth data of the detected keypoints are taken from the depth image and processed in the RANSAC step?
I would be very grateful.


best regards
Lucas

Re: How does RGBD SLAM work?

matlabbe
Administrator
There are many ways to use RGB-D images to do SLAM; I will limit myself here to what is done in RTAB-Map. RGB-D SLAM in RTAB-Map is split into two parts: the front-end (odometry) and the back-end (loop closure detection and graph optimization).

Front-end:
-> constant time complexity
-> Fast ~20 Hz
-> Extract visual features in the RGB image, get the depth of these features from the depth image, then do a RANSAC rigid transformation estimation against the previous image using the corresponding 3D features (correspondences are found by matching 2D visual features between the RGB images).
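
The front-end estimation step can be sketched in plain NumPy (a simplified, hypothetical sketch, not RTAB-Map's actual C++ code): given matched 3D feature pairs from two frames, RANSAC repeatedly samples a minimal set of 3 correspondences, fits a rigid transform with the SVD (Kabsch) method, and keeps the model with the most inliers.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst via SVD."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_rigid(src, dst, iters=100, thresh=0.05, seed=0):
    """RANSAC over 3D-3D correspondences: sample 3 pairs, fit, count inliers,
    then refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)
        R, t = kabsch(src[idx], dst[idx])
        err = np.linalg.norm((src @ R.T + t) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = kabsch(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers
```

The 3-point minimal sample is what makes this robust: a few bad matches cannot outvote a consistent majority of good ones.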

Back-end:
-> linear complexity (when memory management of RTAB-Map is disabled)
-> Doesn't have to run at a high frame rate, so 1 Hz is used by default
-> The graph is created here, where each node contains the RGB and depth images with the corresponding odometry pose. The links are the transformations between nodes. When the graph is updated, RTAB-Map compares the new image with all previous ones in the graph to find a loop closure. When a loop closure is found, graph optimization is done to correct the poses in the graph.
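
The graph-optimization idea can be illustrated with a deliberately tiny, translation-only pose graph (a toy sketch with hypothetical names, not RTAB-Map's actual optimizer, which works on full 6-DoF poses): odometry links accumulate drift, and a loop-closure link lets least squares redistribute that error along the whole trajectory.

```python
import numpy as np

def optimize_poses(n, odom_links, loop_links):
    """Least-squares optimization of n 1-D (translation-only) poses.
    Each link is (i, j, d): a measurement that pose j minus pose i equals d.
    Pose 0 is softly anchored at the origin to fix the gauge freedom."""
    edges = odom_links + loop_links
    A = np.zeros((len(edges) + 1, n))
    b = np.zeros(len(edges) + 1)
    A[0, 0] = 1.0  # anchor row: x0 ~= 0
    for k, (i, j, d) in enumerate(edges, start=1):
        A[k, i], A[k, j] = -1.0, 1.0   # residual: (x_j - x_i) - d
        b[k] = d
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

For example, four odometry steps that each overestimate the motion by 0.1 m, plus one loop closure saying the robot is back at the start, get smoothed into a consistent trajectory instead of the error being dumped on a single link.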

Visualization:
-> For each node in the graph, we generate a point cloud from the RGB and depth images. This point cloud is transformed using the pose in the node. The 3D map is then created.
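
This visualization step amounts to back-projecting each depth image through the pinhole camera model and moving the resulting points with the node's pose. A minimal sketch, assuming depth in meters (0 = invalid) and hypothetical intrinsics fx, fy, cx, cy:

```python
import numpy as np

def cloud_from_depth(depth, fx, fy, cx, cy,
                     pose_R=np.eye(3), pose_t=np.zeros(3)):
    """Back-project a depth image into a point cloud, then move it into the
    map frame using the node's pose (rotation pose_R, translation pose_t)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0                      # drop pixels with no depth
    x = (u.ravel() - cx) * z / fx      # pinhole model: X = (u - cx) * Z / fx
    y = (v.ravel() - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)[valid]
    return pts @ pose_R.T + pose_t     # camera frame -> map frame
```

Concatenating the clouds of all nodes, each transformed by its (optimized) pose, gives the assembled 3D map.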

Re: How does RGBD SLAM work?

SirMinsk
Thank you!
That is exactly what I was looking for.

Re: How does RGBD SLAM work?

longchao
In reply to this post by matlabbe
Hi matlabbe,
I have encountered some problems while trying to write an odometry. After extracting visual features, I use knnMatch and cv::findFundamentalMat (which uses the RANSAC method to reject outliers) to filter the matches in order to get robust features. Then I remove the keypoints whose depth value is NaN. In the end, fewer than 5 good matches are left. I think it is not appropriate to use the PCL SVD method to get the transform from so few points. Can you give me some suggestions? Thanks :)

Re: How does RGBD SLAM work?

matlabbe
Administrator
There are some pipeline examples in the PCL registration API documentation.

For example, the pcl::registration::CorrespondenceRejectorSampleConsensus class can be used to get a transformation between corresponding 3D points.
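
Regarding the pipeline in the question above: one common adjustment is to drop keypoints with invalid depth before the geometric verification step, so the outlier rejection spends its budget only on points that can actually feed the 3D transform estimation. A hypothetical sketch with plain arrays (the names and the (u, v) keypoint layout are assumptions, not an RTAB-Map API):

```python
import numpy as np

def filter_matches_by_depth(matches, kp1, kp2, depth1, depth2):
    """Keep only matches whose keypoints have finite, positive depth in BOTH
    frames. Running this before geometric verification avoids throwing away
    scarce good matches after the fact."""
    keep = []
    for i, j in matches:
        u1, v1 = kp1[i]                      # keypoint pixel coords (u, v)
        u2, v2 = kp2[j]
        d1, d2 = depth1[v1, u1], depth2[v2, u2]
        if np.isfinite(d1) and d1 > 0 and np.isfinite(d2) and d2 > 0:
            keep.append((i, j))
    return keep
```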