More loops isn't always better


Pierre
Hi Mathieu,

I have come to notice that, very often, loop closures can be detrimental when there are too many of them, to the point that I have now stopped using the detectMoreLoops tool and instead add loops manually, frame by frame, with the databaseViewer. For huge projects though (> 100,000 nodes), that is too much work, so I have no real solution right now.

For all my big projects I've been using an iPhone, so registration is done with PnP using the ARKit depth map. Given how imprecise those depth maps are, it makes sense that although a global loop can be very helpful to correct a large drift, its covariance is pretty high, and too many such loops will overwhelm the effect of odometry.

I have been trying to understand what is happening and how to possibly find a solution. To explain my situation a bit, here is a list of cases that occur in my scans, each requiring a different density of loop closures:

- when scanning a "straight" gallery, no extra loop can beat ARKit odometry, so the loops between nodes n and n+1 detected by detectMoreLoopClosures are usually just detrimental. I have run many tests and concluded that, in that case, raw odometry is better than a reprocessed database, whatever parameters I choose

- when scanning a very large area that requires walking all around the place several times to cover it fully, and thus needs many loops, the output of rtabmap-reprocess is usually very relevant and detects the most important global loops. If some are missing, one iteration of detectMoreLoops with strict parameters helps. If more are still missing, further iterations help in the sense that "double walls" tend to disappear; however, the density of global loops keeps growing until it overpowers odometry, and although the local geometry is fine, the global graph is deformed and impossible to georeference

- when multi-session scanning a place where some galleries appear twice in the final merged database, this can result in a double gallery if rtabmap-reprocess hasn't figured out that they are the same. Adding one manual loop with rtabmap-databaseViewer works very well in that case. On the other hand, rtabmap-detectMoreLoopClosures will typically add loops path1[i] -> path2[i] for many i's, if that makes sense. Several iterations will add loops path1[i] -> path2[i+1] and so on, until the loops overpower the odometry of the two paths.

- when scanning a huge area that is a mix of straight galleries, larger spaces, intersections, and multi-session scans with some overlap and possibly challenging loop closures, all of the above issues appear. A single reprocessing is usually not enough to close the necessary loops, and repeated iterations of detectMoreLoopClosures will eventually deform the final graph. Several tests showed huge angular drifts caused by a single iteration of detectMoreLoops, leading to hundreds of meters of error on the final graph even though the odometry graph was a very good initial guess.

Now here are my attempts at understanding what is going on:

- hundreds of thousands of global loops will overtake the weighted cost in graph optimisation. Hence, one way to deal with the situation could be to increase the global loop covariance as more loops are added. I tried putting a covariance of 9999 on global loops in the example mentioned above, where the final drift is about a hundred meters, and the result showed almost no improvement; it was still very far from the initial odometry graph.

- another concern I have is about the computation of the links' covariance. I was initially expecting a covariance on rotation R and translation t derived from PnP reprojection Jacobians. In practice, the covariance associated with PnP links appears to be a diagonal, heuristic estimate based on median residual errors, with translation and rotation treated independently. As a result, correlations between rotation and translation are ignored, and the conditioning of the PnP problem itself is not reflected in the information matrix. I'm wondering why Vis/PnPSplitLinearCovComponents is set to False by default. More generally, I would be interested in your thoughts on whether this covariance approach is appropriate for very large graphs with dense loop closures, or if a more probabilistic PnP uncertainty model could help mitigate the undesired effects.
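To see why sheer loop count can overpower odometry even when each loop has a high covariance, here is a toy 1D pose-graph example (my own illustration, not RTAB-Map code): in information form, N loop closures with variance sigma^2 between the same pair of nodes act like a single constraint with variance sigma^2/N.

```python
import numpy as np

def solve_1d_graph(n_loops, loop_var=100.0, odom_var=0.01):
    """Tiny 1D pose graph, x0 fixed at 0, unknowns [x1, x2].
    Odometry: x1 - x0 = 1 and x2 - x1 = 1 (confident, small variance).
    Loop, repeated n_loops times: x2 - x0 = 1.5 (weak, large variance)."""
    rows = [[1.0, 0.0], [-1.0, 1.0]] + [[0.0, 1.0]] * n_loops
    rhs = [1.0, 1.0] + [1.5] * n_loops
    w = np.array([1 / odom_var] * 2 + [1 / loop_var] * n_loops)
    A, b = np.array(rows), np.array(rhs)
    H = A.T @ (A * w[:, None])          # information matrix
    g = A.T @ (w * b)
    return np.linalg.solve(H, g)[1]     # weighted least-squares estimate of x2

# odometry says x2 = 2.0, the (repeated) loop says x2 = 1.5
for n in (1, 100, 10000, 1000000):
    print(n, round(solve_1d_graph(n), 3))
```

Even with a loop variance 10,000 times larger than the odometry variance, a million identical loops pull x2 to the loop's value of 1.5 instead of odometry's 2.0.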

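For comparison, here is a minimal sketch of the Jacobian-based PnP covariance mentioned above, Sigma ~ sigma_r^2 * (J^T J)^-1, where J is the reprojection Jacobian at the solution. This is my own toy implementation with a simple pinhole model and finite differences, not what RTAB-Map computes; the off-diagonal terms are what couple rotation and translation.

```python
import numpy as np

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector (Rodrigues formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def reproj_residuals(pose, pts3d, pts2d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Stacked pixel residuals for pose = [rvec(3), tvec(3)], pinhole camera."""
    R, t = rodrigues(pose[:3]), pose[3:]
    pc = pts3d @ R.T + t
    u = fx * pc[:, 0] / pc[:, 2] + cx
    v = fy * pc[:, 1] / pc[:, 2] + cy
    return np.concatenate([u - pts2d[:, 0], v - pts2d[:, 1]])

def pnp_covariance(pose, pts3d, pts2d, eps=1e-6):
    """Sigma ~ sigma_r^2 * (J^T J)^-1 with a finite-difference Jacobian."""
    r0 = reproj_residuals(pose, pts3d, pts2d)
    J = np.zeros((r0.size, 6))
    for i in range(6):
        d = np.zeros(6)
        d[i] = eps
        J[:, i] = (reproj_residuals(pose + d, pts3d, pts2d) - r0) / eps
    dof = max(r0.size - 6, 1)
    sigma2 = (r0 @ r0) / dof                # residual variance estimate
    return sigma2 * np.linalg.inv(J.T @ J)  # full 6x6; off-diagonals couple R and t
```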

Finally, here are some ideas on how to address the issue. They shouldn't be taken as "suggestions" but rather as attempts at spotting possible sources of error, and as input for discussion.

- in some way, part of the problem isn't really about individual global loop covariance but more about locally accumulated covariance. If one wants odometry links to overpower global loop closures for local geometry, shouldn't the local contribution of a node to the graph optimisation be the sum of the odometry contribution and the global-loop contribution, normalized by the number of global loops in a local neighborhood?

- if many iterations of detectMoreLoopClosures eventually do find all the necessary loops, but one ends up with too many of them, a "filter" could be applied to get rid of the redundant ones. Typically, two nodes that are close to each other in the odometry graph shouldn't be globally looped. More generally, two nodes that are already close in the full graph shouldn't receive any extra loops. One way of filtering could be to iterate over all nodes, intersect the graph with a spatial neighborhood of fixed radius around each node, and greedily remove global loops in order of covariance as long as the intersected graph doesn't become disconnected.
Another, less brutal, approach would be to add a parameter like "LoopClosureMinLength", which would let the user specify the minimum distance between two nodes in the graph before a new loop can be added between them. This sounds to me like a simple option that could address most of the issues.
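The normalization idea above could be prototyped along these lines (hypothetical sketch; the function name and the node-index neighborhood proxy are my own): count the loops touching each node's neighborhood and inflate each loop's covariance by that count, so the summed loop information in any neighborhood stays bounded relative to odometry.

```python
from collections import defaultdict

def normalize_loop_covariances(n_nodes, loop_edges, radius=10):
    """loop_edges: list of (i, j, cov), with |i - j| in node ids used as a
    cheap proxy for odometry-graph distance. Returns the edges with cov
    scaled by the number of loops touching each endpoint's neighborhood."""
    loops_near = defaultdict(int)
    for i, j, _ in loop_edges:
        for n in range(max(0, i - radius), min(n_nodes, i + radius + 1)):
            loops_near[n] += 1
        for n in range(max(0, j - radius), min(n_nodes, j + radius + 1)):
            loops_near[n] += 1
    scaled = []
    for i, j, cov in loop_edges:
        k = max(loops_near[i], loops_near[j])  # densest neighborhood wins
        scaled.append((i, j, cov * k))         # information divided by k
    return scaled
```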

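And here is a rough sketch of the filtering idea (again hypothetical, using hop count in the graph as the "distance", in the spirit of the proposed LoopClosureMinLength): keep a loop only if its endpoints are still far apart in the graph built so far.

```python
from collections import defaultdict, deque

def filter_loops_by_graph_distance(odom_edges, loop_edges, min_length=5):
    """Greedily keep a loop only if its endpoints are more than min_length
    hops apart in the graph built so far (odometry + already-kept loops)."""
    adj = defaultdict(set)
    for i, j in odom_edges:
        adj[i].add(j)
        adj[j].add(i)

    def hop_distance(src, dst, cutoff):
        # BFS bounded by cutoff; returns cutoff + 1 if dst is farther than that
        seen, q = {src}, deque([(src, 0)])
        while q:
            node, d = q.popleft()
            if node == dst:
                return d
            if d < cutoff:
                for nb in adj[node]:
                    if nb not in seen:
                        seen.add(nb)
                        q.append((nb, d + 1))
        return cutoff + 1

    kept = []
    for i, j in sorted(loop_edges):
        if hop_distance(i, j, min_length) > min_length:
            kept.append((i, j))
            adj[i].add(j)   # the kept loop now shortens future distances
            adj[j].add(i)
    return kept
```

On a 10-node odometry chain, for example, a short loop like (0, 2) is rejected outright, and once (0, 8) is kept, the now-redundant (0, 9) and (1, 9) are rejected too.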

I'm sorry for this very long post; I hope it is worth your time reading it. Thank you in advance for any discussion on this matter :)

Pierre

Re: More loops isn't always better

matlabbe
Administrator
Hi Pierre, here are some of my thoughts.

Pierre wrote
- when scanning a "straight" gallery, no extra loop can beat ARKit odometry, so the loops between nodes n and n+1 detected by detectMoreLoopClosures are usually just detrimental. I have run many tests and concluded that, in that case, raw odometry is better than a reprocessed database, whatever parameters I choose

- when scanning a very large area that requires walking all around the place several times to cover it fully, and thus needs many loops, the output of rtabmap-reprocess is usually very relevant and detects the most important global loops. If some are missing, one iteration of detectMoreLoops with strict parameters helps. If more are still missing, further iterations help in the sense that "double walls" tend to disappear; however, the density of global loops keeps growing until it overpowers odometry, and although the local geometry is fine, the global graph is deformed and impossible to georeference
If loop closures make things worse, it could be because the loop closures' covariance is too confident (too small) and/or the odometry covariance is not confident enough (too large). For example, with rtabmap-databaseViewer, you can change the covariance of all neighbor links to something very small (Edit->Update all neighbor covariances...); if it is close to 0, the loop closures won't bend the odometry constraints at all. Conversely, you can experiment with a larger covariance for loop closure constraints (Edit->Update all loop closure covariances...).

Pierre wrote
- when multi-session scanning a place where some galleries appear twice in the final merged database, this can result in a double gallery if rtabmap-reprocess hasn't figured out that they are the same. Adding one manual loop with rtabmap-databaseViewer works very well in that case. On the other hand, rtabmap-detectMoreLoopClosures will typically add loops path1[i] -> path2[i] for many i's, if that makes sense. Several iterations will add loops path1[i] -> path2[i+1] and so on, until the loops overpower the odometry of the two paths.
There could be improvements to make rtabmap-detectMoreLoopClosures add loop closures more intelligently, assuming that odometry constraints are always better. For example, we could define a new parameter setting a minimum distance, in terms of nodes, between two nodes checked for loop closure. This is different from the current parameter setting the minimum radius (-rx), which doesn't allow two close nodes (in the optimized graph) taken a long time apart to produce a loop closure. In your case, maybe the odometry covariance is too large; see the previous comment and try a smaller one.
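Such a candidate-selection rule could be as simple as the following predicate (a hypothetical sketch, not the actual -rx implementation): require the pair to be spatially close in the optimized graph while being far apart in node ids.

```python
def is_loop_candidate(id_a, id_b, pos_a, pos_b, radius=2.0, min_node_gap=50):
    """pos_* are (x, y, z) in the optimized graph. A pair qualifies only if it
    is spatially close (within radius) AND far apart along the trajectory,
    so consecutive odometry nodes are never proposed as loop candidates."""
    dx = [a - b for a, b in zip(pos_a, pos_b)]
    within_radius = sum(d * d for d in dx) <= radius * radius
    far_in_trajectory = abs(id_a - id_b) >= min_node_gap
    return within_radius and far_in_trajectory
```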

Pierre wrote
- when scanning a huge area that is a mix of straight galleries, larger spaces, intersections, and multi-session scans with some overlap and possibly challenging loop closures, all of the above issues appear. A single reprocessing is usually not enough to close the necessary loops, and repeated iterations of detectMoreLoopClosures will eventually deform the final graph. Several tests showed huge angular drifts caused by a single iteration of detectMoreLoops, leading to hundreds of meters of error on the final graph even though the odometry graph was a very good initial guess.

Now here are my attempts at understanding what is going on:

- hundreds of thousands of global loops will overtake the weighted cost in graph optimisation. Hence, one way to deal with the situation could be to increase the global loop covariance as more loops are added. I tried putting a covariance of 9999 on global loops in the example mentioned above, where the final drift is about a hundred meters, and the result showed almost no improvement; it was still very far from the initial odometry graph.
Don't put a 9999 covariance; try 9998 instead. There are many places in the code that will detect >=9999 as invalid, or that adjust which kind of constraint is added to the optimizer (for example, we can treat a prior with 9999 rotational covariance as an EdgeXYZ constraint instead of EdgeXYZRPY, thus completely ignoring the rotation in the optimization). Setting the covariance that high could cause issues in the optimizer, where constraints are so down-weighted that it won't converge at all. However, in theory, for single-session maps, setting a very large covariance on loop closure links should make the optimizer converge to essentially the original odometry poses. For multi-session maps, at least one loop closure link between two different sessions should keep an appropriate covariance, otherwise the optimizer would likely diverge.

Pierre wrote
- another concern I have is about the computation of the links' covariance. I was initially expecting a covariance on rotation R and translation t derived from PnP reprojection Jacobians. In practice, the covariance associated with PnP links appears to be a diagonal, heuristic estimate based on median residual errors, with translation and rotation treated independently. As a result, correlations between rotation and translation are ignored, and the conditioning of the PnP problem itself is not reflected in the information matrix. I'm wondering why Vis/PnPSplitLinearCovComponents is set to False by default. More generally, I would be interested in your thoughts on whether this covariance approach is appropriate for very large graphs with dense loop closures, or if a more probabilistic PnP uncertainty model could help mitigate the undesired effects.
The covariance model could indeed be improved. We made one based on the 3D correspondences between the features, as OpenCV's solvePnPRansac doesn't provide any feedback (or internally computed errors on the transform) from its optimization. Vis/PnPSplitLinearCovComponents is false because it was probably an experimental test with which I didn't see a significant difference compared to the previous approach. Note that in my experience with covariance tuning, there will always be some event or external factor in the real world that makes the computed covariance (even if mathematically correct) fail, because it underestimates or overestimates the real error (for lack of the features necessary to actually converge to the real global minimum). On the iOS app, the odometry covariance is currently eyeballed and fixed, because ARKit doesn't provide any error in its pose estimation. It could be lowered, but ARKit drift is not constant, so we cannot make it too small, to avoid an overconfidence that would reject any loop closure detection down the road.
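The 3D-correspondence heuristic described could look roughly like this simplified reconstruction (mine, not the actual RTAB-Map code; in particular, reusing the linear error for the angular part is just a placeholder): apply the estimated transform to the matched 3D features and use the squared median residual as a diagonal variance.

```python
import numpy as np

def heuristic_link_covariance(pts_from, pts_to, R, t, min_var=1e-6):
    """Diagonal 6x6 covariance from 3D feature correspondences.
    pts_from/pts_to: (N, 3) matched points; R, t: estimated transform."""
    residuals = np.linalg.norm(pts_from @ R.T + t - pts_to, axis=1)
    lin_var = max(float(np.median(residuals)) ** 2, min_var)
    ang_var = lin_var  # placeholder: rotation treated independently, same scale
    cov = np.eye(6)
    cov[:3, :3] *= lin_var   # x, y, z
    cov[3:, 3:] *= ang_var   # roll, pitch, yaw
    return cov
```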

Pierre wrote
Finally, here are some ideas on how to address the issue. They shouldn't be taken as "suggestions" but rather as attempts at spotting possible sources of error, and as input for discussion.

- in some way, part of the problem isn't really about individual global loop covariance but more about locally accumulated covariance. If one wants odometry links to overpower global loop closures for local geometry, shouldn't the local contribution of a node to the graph optimisation be the sum of the odometry contribution and the global-loop contribution, normalized by the number of global loops in a local neighborhood?

- if many iterations of detectMoreLoopClosures eventually do find all the necessary loops, but one ends up with too many of them, a "filter" could be applied to get rid of the redundant ones. Typically, two nodes that are close to each other in the odometry graph shouldn't be globally looped. More generally, two nodes that are already close in the full graph shouldn't receive any extra loops. One way of filtering could be to iterate over all nodes, intersect the graph with a spatial neighborhood of fixed radius around each node, and greedily remove global loops in order of covariance as long as the intersected graph doesn't become disconnected.
Another, less brutal, approach would be to add a parameter like "LoopClosureMinLength", which would let the user specify the minimum distance between two nodes in the graph before a new loop can be added between them. This sounds to me like a simple option that could address most of the issues.
Looks like I should have read your whole post before starting to answer above. I think "LoopClosureMinLength" is exactly what I was thinking about earlier.

Thanks for the feedback about rtabmap-detectMoreLoopClosures; I created a ticket about adding a parameter like "LoopClosureMinLength".