Safely Pruning LTM

Safely Pruning LTM

CrinkleCat
Thank you for all your amazing work. I have a question about removing LTM nodes.

Suppose my robot lives in a room for a year. It makes sense, at that point, to open new rtabmap databases every few days. However, is there a safe way to prune/purge long-term memory offline in a single DB? If the LTM nodes are necessary links in a pose graph stored across both WM and LTM, then you can't naively remove data from LTM. The ReduceGraph algorithm merges poses safely when they are found to be redundant during operation, but it would be nice to have a garbage-collection feature that removes nodes and links that have not been pulled into working memory after N days or runs.
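As I understand it, the reason naive deletion is unsafe is that each LTM node carries the relative-transform links that hold the pose graph together, so dropping a node without replacing its links can split the graph. A toy SE(2) sketch of the link composition that a ReduceGraph-style merge performs (illustrative values only, not rtabmap's actual API):

import math

def compose(t1, t2):
    """Compose two SE(2) transforms (x, y, theta): the kind of link
    merging a graph reduction performs when removing a redundant node."""
    x1, y1, a1 = t1
    x2, y2, a2 = t2
    return (x1 + math.cos(a1) * x2 - math.sin(a1) * y2,
            y1 + math.sin(a1) * x2 + math.cos(a1) * y2,
            a1 + a2)

# Chain A -> B -> C held together by two odometry links. Deleting B
# naively drops both constraints and splits the graph; merging instead
# replaces them with a single composed A -> C link.
t_ab = (1.0, 0.0, math.pi / 2)  # toy link A -> B
t_bc = (2.0, 0.0, 0.0)          # toy link B -> C
print(tuple(round(v, 3) for v in compose(t_ab, t_bc)))  # (1.0, 2.0, 1.571)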
Re: Safely Pruning LTM

matlabbe
Administrator
You can read the end of the discussion in this paper:
Finally, the graph reduction approach can reduce significantly the number of nodes and links saved in LTM to reduce TPP planning time. However, because of dynamic events or the lack of features (e.g., Fig. 10e), new nodes and links will inevitably be added to LTM over time when revisiting the same areas. As an improvement, nodes with featureless image could be merged through a maximum density threshold like in (Milford and Wyeth, 2010), as they cannot be used for loop closure detection. After applying graph reduction on the experimental data, there are still 3068 featureless nodes of 6059 nodes in the global graph, which would reduce by about 50% the remaining graph. However, even by limiting the rate at which the LTM grows, a continuous SLAM approach in unbounded dynamic environments will always add new data over time. A complementary strategy would be to definitely forget some parts of the global map, at the cost of not being able to return to some locations.
And maybe more related to your setup, for a robot moving in the same room/house/apartment the whole year, here is the relevant discussion from this other paper:
Multi-session seems a valid approach to improve visual re-localization robustness to illumination changes in indoor environments. The dataset used in this paper is however limited to one day. Depending whether it is sunny, cloudy or rainy, or because of variations of artificial lighting conditions in the environment or if curtains are open or closed, more mapping sessions would have to be taken to keep high re-localization performance over time. During weeks or months, changes in the environment (e.g., furniture changes, items that are being moved, removed or added) could also influence performance. Continuously updating the multi-session map to adapt over time to environment changes could be a solution (Labbé and Michaud, 2017) which however, as the results suggest, would require more RAM even if graph reduction is enabled. For very long-term continuous multi-session mapping, a solution using RTAB-Map could be to enable its memory management approach (Labbé and Michaud, 2013), which would limit the size of the map in RAM. Another complementary approach to graph reduction could be to remove offline nodes on which the robot did not re-localize for a while (like weeks or months). Each node in the map would have to keep information about when was the last time a new frame has been re-localized on it. For example, if a room in the house has been renovated or redecorated, the robot could eventually definitely “forget” the old room images while keeping only the new ones. Similarly, the more formal probabilistic approach to model feature persistence from (Rosen et al., 2016) could also be integrated to remove features from the map that have “vanished” over time from the environment.
Such "forgetting" approach has not been implemented in rtabmap... yet. That could be done by a script reading the database and modifying the graph accordingly. For a density filtering approach, all data may be already there, as only the stamps/IDs are needed to know which nodes are older than others. As mentioned in the previous quote, if nodes can also keep the last time a loop closure happened on them, that could be useful to avoid removing good nodes for localization.