Re: RGBD Outdoor Mapping - Offline Database Processing - Localization
Posted by DavideR on
URL: http://official-rtab-map-forum.206.s1.nabble.com/RGBD-Outdoor-Mapping-Offline-Database-Processing-Localization-tp5258p5280.html
Thank you, Mathieu, for your support!
I've tried to figure out how RTAB-Map's Localization mode works. I'm not sure I completely understood what is going on; could you confirm whether I got it right?
So, Localization mode is based on the same algorithm used for loop closure detection during mapping. Therefore, when I move the robot in an already-mapped environment, the RGB-D images streamed by the camera are used for appearance-based place recognition, done through the bag-of-visual-words technique.
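To make sure I understand the bag-of-visual-words step, here is a toy sketch of what I picture happening (all data here is made up; I know the real system builds an incremental vocabulary from quantized feature descriptors, this is just the voting idea):

```python
# Toy appearance-based place recognition with an inverted index:
# visual word id -> keyframes in the map that contain that word.
from collections import Counter

inverted_index = {
    1: [0, 2], 2: [0], 3: [1], 4: [1, 2], 5: [2],
}

def likelihood(query_words):
    """Vote for past keyframes sharing visual words with the query image."""
    votes = Counter()
    for w in query_words:
        for kf in inverted_index.get(w, []):
            votes[kf] += 1
    return votes

# A new image whose features quantize to words {1, 4, 5} votes most
# strongly for keyframe 2 (3 shared words).
print(likelihood([1, 4, 5]).most_common(1))  # -> [(2, 3)]
```

Is that roughly the mechanism used to pick the localization hypothesis?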
[Here I am not sure how the depth information comes into play:
1. Does every keyframe feature stored in the map carry its depth?
2. Is the depth of the features used for estimating the camera pose through SfM?
3. The point cloud should not be necessary when localizing through the RGB visual-words approach. Is it true, though, that the point cloud can be exploited to refine the pose estimate through ICP?]
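On question 2, my guess is that the stored depth turns each 2D keypoint into a 3D point, so the pose can be solved as a 3D-to-2D (PnP-style) problem with metric scale instead of scale-less SfM. A minimal back-projection sketch, with hypothetical pinhole intrinsics (fx, fy, cx, cy are assumptions, not values from RTAB-Map):

```python
import numpy as np

# Hypothetical intrinsics for a VGA RGB-D camera.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, depth):
    """Return the 3D point (camera frame, metres) for pixel (u, v)
    whose registered depth reading is 'depth' metres."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

p = backproject(419.5, 339.5, 2.0)
print(p)  # 100 px off-centre at 2 m -> ~0.381 m offset in x and y
```

Is this the role the per-feature depth plays in the localization transform estimation?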
The localization output is a map->odom transform, so it should estimate the robot's position within the map.
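If I understand the frame convention, the published map->odom transform is just the correction between the globally localized pose and the drifting odometry pose. A 2D sketch of how I think it composes (the numbers are invented for illustration):

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D rigid transform (rotation theta, translation x, y)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

map_T_base  = se2(5.0, 2.0, 0.0)   # pose found by place recognition
odom_T_base = se2(4.0, 2.0, 0.0)   # drifting odometry estimate

# map->odom = map->base composed with inverse(odom->base):
map_T_odom = map_T_base @ np.linalg.inv(odom_T_base)
print(map_T_odom[:2, 2])  # -> [1. 0.]  (1 m of accumulated drift in x)
```

So downstream nodes keep consuming smooth odometry while the correction absorbs the drift — is that the intended usage?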
Given your experience, what should I expect (assuming good overall conditions)?
A. A reliable, "continuous" global localization.
B. Just a "discrete" global localization, useful only for localization recovery when other localization methods fail (I'm thinking of kidnapped-robot situations).
Can I use appearance-based localization to locate the robot within a 2D occupancy map?
Excuse my naive questions... I'm quite a newbie in this field.
Thank you again for your support
Cheers