More loops isn't always better

Pierre
Hi Mathieu,

I have come to notice that loop closures can often be detrimental when there are too many of them, to the point that I have stopped using the detectMoreLoops tool and now add loops manually, frame by frame, in the databaseViewer instead. For huge projects though (> 100,000 nodes), that is too much work, so I have no real solution right now.

For all my big projects I have been using an iPhone, so registration is done with PnP using the ARKit depth map. Knowing how imprecise those depth maps are, it makes sense that although a global loop can be very helpful to correct a large drift, its covariance is pretty high, and too many such loops will drown out the effect of odometry.

I have been trying to understand what is happening and how to possibly solve the issue. To explain my situation a bit, here is a list of cases that occur in my scans, each requiring a different density of loop closures:

- when scanning a "straight" gallery, no extra loop can beat ARKit odometry, so the loops between node n and node n+1 detected by detectMoreLoopClosures are usually just detrimental. I have run many tests and come to the conclusion that, in this case, raw odometry is better than a reprocessed database, whatever parameters I choose

- when scanning a very large area that requires walking all around the place several times to cover it fully, many loops are needed. The output of rtabmap-reprocess is usually very relevant and is able to detect the most important global loops. If some are missing, one iteration of detectMoreLoops with strict parameters helps. If more are still missing, further iterations are helpful in that "double walls" tend to disappear; however, the density of global loops grows until it overpowers odometry, and although the local geometry is fine, the global graph gets deformed and becomes impossible to georeference

- when multi-session scanning a place where some galleries appear twice in the final merged database, this can result in a double gallery if rtabmap-reprocess has not figured out that they are the same. Adding one manual loop with rtabmap-databaseViewer works very well in that case. On the other hand, rtabmap-detectMoreLoopClosures will typically add loops "path1[i] -> path2[i]" for many i's, if that makes sense. Several iterations will then add loops "path1[i] -> path2[i+1]" and so on, until the loops overpower the odometry of the two paths.

- when scanning a huge area that mixes straight galleries, larger rooms, intersections, and multi-session scans with some overlap and possibly challenging loop closures, all of the above issues appear. A single reprocessing is usually not enough to close the necessary loops, and repeated iterations of detectMoreLoopClosures will eventually deform the final graph. Several tests show huge angular drifts caused by a single iteration of detectMoreLoops, leading to hundreds of meters of error on the final graph even though the odometry graph was a very good initial guess.

Now here are my attempts at understanding what is going on:

- hundreds of thousands of global loops will overtake the weighted cost in graph optimisation. Hence, one way to deal with the situation could be to increase global-loop covariance as more loops are added. In the example mentioned above, where the final drift was about a hundred meters, I tried setting a covariance of 9999 on global loops, and the result was almost no improvement: still very far from the initial odometry graph.
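
To make this intuition concrete, here is a toy 1D sketch (my own numbers, not RTAB-Map code, and a simplification since real loops constrain different node pairs): in least squares, the information (inverse covariance) of independent constraints adds up, so even a deliberately huge per-loop covariance stops helping once the loop count grows.

```python
# Toy illustration of why inflating per-loop covariance alone is not enough:
# information (1/covariance) of stacked constraints is additive.
odom_var = 0.01      # tight odometry link
loop_var = 9999.0    # deliberately inflated loop covariance
n_loops = 200_000    # hundreds of thousands of accepted loops

odom_info = 1.0 / odom_var          # weight of one odometry link
total_loop_info = n_loops / loop_var  # combined weight of all loops

print(odom_info)        # 100.0
print(total_loop_info)  # ~20.0, already a fifth of the odometry weight
# With ten times more loops, the loop term would dominate despite the
# 9999 covariance, matching the "almost no improvement" observation.
```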

- another concern I have is about the computation of the links' covariance. I was initially expecting a covariance on rotation R and translation t derived from PnP reprojection Jacobians. In practice, the covariance associated with PnP links appears to be a diagonal, heuristic estimate based on median residual errors, with translation and rotation treated independently. As a result, correlations between rotation and translation are ignored, and the conditioning of the PnP problem itself is not reflected in the information matrix. I'm wondering why Vis/PnPSplitLinearCovComponents is set to False by default. More generally, I would be interested in your thoughts on whether this covariance approach is appropriate for very large graphs with dense loop closures, or if a more probabilistic PnP uncertainty model could help mitigate the undesired effects.
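
For reference, the covariance model I was expecting is the standard first-order one, Sigma = sigma_px^2 * (J^T J)^(-1), where J stacks the reprojection Jacobians of all inlier points with respect to the 6 pose parameters. Here is a minimal numpy sketch with a random stand-in Jacobian (my own illustration of that textbook formula, not RTAB-Map's actual code):

```python
import numpy as np

# Full 6x6 pose covariance from a PnP reprojection Jacobian:
# Sigma = sigma_px^2 * (J^T J)^{-1}. A diagonal heuristic based on
# median residuals discards the off-diagonal rotation-translation terms.
rng = np.random.default_rng(0)
n_points = 50
J = rng.standard_normal((2 * n_points, 6))   # stand-in for the 2-per-point
                                             # reprojection Jacobian rows
sigma_px = 1.5                               # pixel noise std

info = J.T @ J / sigma_px**2                 # information matrix
cov = np.linalg.inv(info)                    # full covariance, R-t coupled

# Off-diagonal blocks couple rotation and translation:
rot_trans_block = cov[:3, 3:]
print(np.allclose(rot_trans_block, 0.0))     # False: correlations exist
```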


Finally, here are some ideas on how to address the issue. They should not be taken as "suggestions" but rather as attempts at spotting the possible sources of error, and as input for discussion.

- in some way, part of the problem is not really about individual global-loop covariance but about locally accumulated covariance. If one wants odometry links to overpower global loop closures for local geometry, then perhaps the local contribution of a node to the graph optimisation should be the sum of the odometry contribution and the global-loop contribution, normalized by the number of global loops in a local neighborhood?
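
One crude version of this normalization could look like the sketch below (a hypothetical helper of mine, not an existing RTAB-Map feature): divide each global loop's information by the number of global loops attached to its densest endpoint, so a node's total loop contribution stays bounded relative to its odometry links.

```python
from collections import defaultdict

def normalized_loop_weights(loops):
    """loops: list of (node_a, node_b, info) tuples.
    Returns the same loops with info divided by the number of global
    loops attached to the densest of the two endpoints."""
    count = defaultdict(int)
    for a, b, _ in loops:
        count[a] += 1
        count[b] += 1
    out = []
    for a, b, info in loops:
        k = max(count[a], count[b])   # densest endpoint dominates
        out.append((a, b, info / k))
    return out

loops = [(1, 10, 100.0), (1, 11, 100.0), (2, 12, 100.0)]
print(normalized_loop_weights(loops))
# node 1 carries two loops -> each downweighted to 50.0; node 2 keeps 100.0
```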

- if many iterations of detectMoreLoopClosures do eventually find all necessary loops, but one ends up with too many of them, a "filter" could be applied to get rid of the redundant ones. Typically, two nodes that are close to each other in the odometry graph should not be globally looped. More generally, two nodes that are already close in the full graph should not receive any extra loops. One way of filtering could be to iterate over all nodes, intersect the graph with a spatial neighborhood of fixed radius around each node, and greedily remove global loops in order of covariance as long as the intersected graph stays connected.
Another less drastic approach would be a parameter like "LoopClosureMinLength", which would let the user specify the minimum distance between two nodes in the graph before a new loop can be added. This sounds to me like a simple option that could address most of the issues.
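
To show what I have in mind for that last option, here is a small sketch ("LoopClosureMinLength" is my hypothetical name, not an existing parameter): a candidate loop is rejected when its two nodes are already within the minimum length of each other along existing graph edges, using a cutoff Dijkstra search.

```python
import heapq
from collections import defaultdict

def graph_distance(edges, src, dst, cutoff):
    """Dijkstra with early exit. edges: {(a, b): length} (undirected).
    Returns the src-dst distance, or None if it exceeds cutoff."""
    adj = defaultdict(list)
    for (a, b), w in edges.items():
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd <= cutoff and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return None

def accept_loop(edges, a, b, min_length):
    """Keep only loops between nodes that are still far apart in the graph."""
    d = graph_distance(edges, a, b, min_length)
    return d is None or d >= min_length

# Chain 0-1-2-3-4 with unit-length odometry edges:
odom = {(i, i + 1): 1.0 for i in range(4)}
print(accept_loop(odom, 0, 1, min_length=3.0))  # False: redundant neighbors
print(accept_loop(odom, 0, 4, min_length=3.0))  # True: closes a long path
```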


I'm sorry for this very long post; I hope it is worth your time reading it. Thank you for any discussion on this matter :)

Pierre