Loop closure registration strategies - Help

Posted by DavideR on
URL: http://official-rtab-map-forum.206.s1.nabble.com/Loop-closure-registration-strategies-Help-tp6421.html

Hi Mathieu,
I would like to ask for some clarification about the registration strategies used after a visual
loop closure detection.
I am sorry, but even after reading your thorough papers, I am still not sure what happens
depending on the selected registration strategy.

Reg/Strategy = 1 (Geometry ICP)

I am not sure whether, even with this strategy, the visual registration (3D-to-2D PnP RANSAC)
must occur before any scan cloud ICP registration.
If not, could you confirm that the motion estimate is computed only from the transform
between the scan cloud stored in the candidate loop closure node and the current
scan cloud?
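Just to check my understanding of the geometry-only case, here is a minimal sketch of what I assume the core of each ICP iteration boils down to once correspondences between the candidate node's scan cloud and the current scan cloud are established: a closed-form rigid transform via SVD (Kabsch). This is my own toy code, not anything from RTAB-Map:

```python
import numpy as np

def rigid_transform(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (the Kabsch/SVD step used inside each ICP iteration)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# toy example: a scan rotated 30 deg around z and shifted
rng = np.random.default_rng(0)
scan = rng.uniform(-5, 5, (100, 3))
th = np.deg2rad(30)
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0,           0,          1]])
t_true = np.array([1.0, -2.0, 0.5])
scan2 = scan @ R_true.T + t_true

R, t = rigid_transform(scan, scan2)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

In the real pipeline the correspondences would of course come from nearest-neighbor search and be re-estimated every iteration; the sketch only shows the transform update step.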

Loop Closure Detection and Visual Registration

For better loop closure detection performance, I think the best way to extract features from
images to update the vocabulary is to scan the entire view without using a depth mask. This
can lead to detecting more distant features, for example in the case of distant buildings in the
background.
Once a loop closure is detected, though, if a 3D-to-2D visual registration strategy is required, it
is necessary to match features that actually have an associated depth value.
So for visual registration it is preferable to extract a sufficient number of features in the
foreground, where a valid depth value is supposed to exist, which in practice means extracting
features using a depth mask.
In case of a loop closure hypothesis, will the features of the recalled candidate image be
extracted again using the parameters set in the visual registration panel (for example, using a depth mask)?
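For context, this is roughly what I mean by a depth mask, sketched with NumPy (the value 7 m for the usable stereo range is just my assumption; the actual RTAB-Map masking behavior may differ): keep only pixels with a valid, in-range depth and use the result as a detection mask.

```python
import numpy as np

MAX_DEPTH = 7.0   # assumed usable stereo range in meters

def depth_mask(depth):
    """Binary mask keeping only pixels with a valid, in-range depth.
    Invalid stereo pixels are assumed encoded as 0 or NaN."""
    valid = np.nan_to_num(depth, nan=0.0)
    return ((valid > 0.0) & (valid <= MAX_DEPTH)).astype(np.uint8) * 255

# toy depth image: left half valid at 3 m, right half out of range / invalid
depth = np.zeros((4, 8), dtype=np.float32)
depth[:, :4] = 3.0
depth[:, 4:6] = 20.0      # beyond stereo range
depth[:, 6:] = np.nan     # no disparity
mask = depth_mask(depth)
print(mask[0])
```

A mask like this could then be passed, for example, as the `mask` argument of an OpenCV detector's `detectAndCompute`, so features are only extracted where a depth is available.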

I'm going to describe an outdoor scenario where I'm having trouble computing the
transform and accepting the loop closure:
A. A correct visual loop closure hypothesis is found (I can see that the recalled image depicts
the same revisited place).
B. The only features to match that have a valid depth value are within the first 5-7
meters in front of the robot, where there is only asphalt.
C. Consequently it is quite hard to get a sufficient number of RANSAC inliers, especially in
the case of flat asphalt without markings.
D. The loop closure is then rejected even though the visual recall matched.
Is there any way to work around this problem?

Depth by structure from motion

I'm wondering if there is any way to estimate the depth of far features (beyond the maximum depth of the stereo camera) by exploiting structure-from-motion algorithms, in order to get more landmarks in the map and therefore more PnP inliers for the camera registration.
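To illustrate what I have in mind, here is a minimal sketch (my own toy code with made-up intrinsics, not anything from RTAB-Map) of linearly triangulating a far point from two camera poses with a wide baseline. If such depths could be recovered by structure from motion, those points could in principle become extra PnP landmarks:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: pixel coords (u, v)."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
# two cameras 1 m apart along x: a wider baseline than a stereo rig,
# obtained from the robot's own motion
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])

X_true = np.array([2.0, 1.0, 30.0])   # a "far" building corner at 30 m
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]

X = triangulate(P1, P2, x1, x2)
print(np.allclose(X, X_true))
```

With noise-free observations the point is recovered exactly; in practice the accuracy of such far landmarks would depend heavily on the baseline and the feature localization error.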


Thank you very much for your support!

Cheers