Hi Mathieu,
We want to update a specific area of the map (it is not covered well, or the environment has changed).
I see 2 options to do it:
1. Generate a new database and append it to the original using post-processing.
2. Run rtabmap in append mode.
Any insights on which option is better?
Can we assume that during the append the original nodes are not deleted, so users can continue sending the robot to the same node ID to reach their goal? Or do users need to redefine all their goals from scratch?
How are the goals saved? If you are using "node IDs" from the graph as "goals", then you can do both of your solutions without extra steps. You should be able to navigate to old goals with the new assembled map.
If goals are fixed metric poses like "x=32, y=54", then one issue that could happen is that the old graph may move a little when appending new data to it (because we are still optimizing the whole graph with any new loop closure). A workaround is to "fix" the nodes of the first map to make sure they don't move when appending new data. To "fix" nodes, add a "prior" constraint with very low covariance to the nodes of the first map, using their current optimized poses (a Python script using sqlite3 directly on the database can be created for that purpose). During graph optimization, those nodes will then be locked in place and only the new nodes will move around. This ensures the assembled map keeps the same origin and that the trajectory stays aligned with the original map.
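Below is a minimal Python/sqlite3 sketch of that idea, not a drop-in tool: the column names, the blob layouts (12 float32 for a pose, 36 float64 for a 6x6 information matrix) and the integer code used for a "prior" link type are assumptions that should be verified against your own database and RTAB-Map version, and the optimized poses have to be supplied by you (for example, exported from rtabmap-databaseViewer).

```python
# fix_first_map.py -- minimal sketch, NOT a drop-in tool.
# Adds a "prior" link on every node of the first map so that graph
# optimization keeps those nodes locked in place when new data is appended.
#
# Assumptions (verify against your own database / RTAB-Map version):
#   * Node.pose and Link.transform are blobs of 12 float32 (3x4 row-major).
#   * Link.information_matrix is a blob of 36 float64 (6x6 row-major).
#   * PRIOR_LINK_TYPE is the integer code used for prior links (check the
#     Link type enum of your rtabmap version).
#   * The poses written in the priors should be the OPTIMIZED poses of the
#     first map, not raw odometry; supplying them is left to you.

import sqlite3
import struct

DB_PATH = "first_map.db"     # work on a copy of the original database
PRIOR_LINK_TYPE = 7          # assumption: code for a prior link
INFO_DIAGONAL = 10000.0      # information = 1/covariance (covariance = 1e-4)

def pack_transform(t):
    """Pack a 3x4 row-major transform (12 floats) into the assumed blob format."""
    return struct.pack("<12f", *t)

def pack_information(diag):
    """Pack a 6x6 row-major information matrix with 'diag' on the diagonal."""
    m = [0.0] * 36
    for i in range(6):
        m[i * 7] = diag      # indices 0, 7, 14, 21, 28, 35 are the diagonal
    return struct.pack("<36d", *m)

def optimized_pose_for(node_id):
    """Placeholder: return the optimized 3x4 pose (12 floats) of this node.
    In practice, export the optimized poses from rtabmap-databaseViewer
    (or your own optimizer output) and look them up here."""
    raise NotImplementedError

conn = sqlite3.connect(DB_PATH)
node_ids = [row[0] for row in conn.execute("SELECT id FROM Node")]
for node_id in node_ids:
    pose = optimized_pose_for(node_id)
    # A prior link connects a node to itself.
    conn.execute(
        "INSERT INTO Link (from_id, to_id, type, information_matrix, transform) "
        "VALUES (?, ?, ?, ?, ?)",
        (node_id, node_id, PRIOR_LINK_TYPE,
         pack_information(INFO_DIAGONAL), pack_transform(pose)))
conn.commit()
conn.close()
```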
Your two choices are mostly the same (they will give the same result), though option 2 is done online instead of offline like option 1. Doing it online can however help guide a person teleoperating the robot, to know that 1) the robot can localize in the first map, and 2) loop closures are being made with old parts of the map. Otherwise, with option 1, the operator is mapping somewhat "blindly", without knowing whether the resulting map can be correctly appended to the first map.
My goals are fixed metric poses. I tried to follow your suggestion of adding prior constraints with very low covariance values, but I might not have done it correctly.
What I assumed I should do is update the main diagonal of the "information_matrix" in the "Link" table. The value I set there was 0.0001, which I thought would be considered low. I did it for all the entries in the table.
1) Is that what you meant?
2) After merging (using rtabmap-reprocess) the map with the prior constraints into a new map, the merged map no longer has the low values I set on the main diagonals. Is that expected? Should I have used a different method to merge the maps?
1) Yes, but the values should be 10000 or more (the information matrix is the inverse of the covariance matrix).
2) Unless the -nopriors option is used, the priors should be republished in the new database. Are the priors there in the output database, but with different values?
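To make point 1) concrete: a covariance of 0.0001 corresponds to 1/0.0001 = 10000 in the information matrix, so writing 0.0001 directly into the information matrix actually declares a very large uncertainty. A tiny numpy illustration (nothing RTAB-Map specific):

```python
import numpy as np

# A "very low" covariance of 1e-4 on each of the 6 DoF...
covariance = np.eye(6) * 1e-4

# ...corresponds to an information matrix with 1e4 on the diagonal,
# because information = inverse(covariance).
information = np.linalg.inv(covariance)
print(np.diag(information))   # prints ~10000 for each of the 6 DoF
```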
I tried values of 100,000 and 999,999. I don't see these values after the merge.
How can I tell whether these are the same priors or not? By their IDs?
You can double-check the covariance afterwards with rtabmap-databaseViewer. Look at the Prior field and hover over it; you will see the covariance matrix (screenshot attached).
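If you prefer to check from a script instead of the GUI, here is a small sqlite3 sketch that dumps the diagonal of the information matrix of every prior link. The blob layout (36 float64) and the prior link type code are the same assumptions as in the earlier sketch and should be checked against your RTAB-Map version:

```python
import sqlite3
import struct

DB_PATH = "merged_map.db"    # hypothetical output of rtabmap-reprocess
PRIOR_LINK_TYPE = 7          # assumption: code for a prior link

conn = sqlite3.connect(DB_PATH)
rows = conn.execute(
    "SELECT from_id, information_matrix FROM Link WHERE type = ?",
    (PRIOR_LINK_TYPE,)).fetchall()

for node_id, info_blob in rows:
    info = struct.unpack("<36d", info_blob)          # assumed 6x6 float64 layout
    diagonal = [info[i * 7] for i in range(6)]       # row-major diagonal entries
    print(node_id, diagonal)
conn.close()
```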
I finally understood how to do it and managed to replicate what you showed in the screenshot.
I iterated over the Node table and, for each node, added a prior link in the Link table where the transform is the pose from the Node table.
But is that actually what we want? We used the odometry pose in the priors. Shouldn't we use the optimized pose from the original map instead?