Re: localization with big db file
Posted by matlabbe
URL: http://official-rtab-map-forum.206.s1.nabble.com/localization-with-big-db-file-tp9851p9892.html
Hi,
I compared with a large map that I made recently (note that I removed the raw image data, shrinking the database from 14 GB to 1.6 GB):
Version: 0.21.3
Sessions: 4
Total odometry length: 7949.869629 m
Total time: 11374.330171 s
LTM: 10351 nodes and 3486621 words (dim=32 type=8U)
WM: 9037 nodes and 3477496 words
Global graph: 10351 poses and 27140 links
Optimized graph: 9037 poses (x=27140->-14, y=145->-75, z=53->0)
Maps in graph: 4/4 [0(2295), 1(1773), 2(2267), 3(2702)]
Ground truth: 0 poses
GPS: 0 poses
Links:
  Neighbor: 18066
  GlobalClosure: 3578
  LocalSpaceClosure: 1552
  LocalTimeClosure: 0
  UserClosure: 3944
  VirtualClosure: 0
  NeighborMerged: 0
  PosePrior: 0
  Landmark: 0
  Gravity: 0
Database size: 1653 MB
  Nodes size: 1 MB (0.10%)
  Links size: 9 MB (0.57%)
  RGB Images size: 0 Bytes (0.00%)
  Depth Images size: 0 Bytes (0.00%)
  Calibrations size: 16 MB (0.99%)
  Grids size: 442 MB (26.73%)
  Scans size: 0 Bytes (0.00%)
  User data size: 0 Bytes (0.00%)
  Dictionary size: 208 MB (12.59%)
  Features size: 1045 MB (63.23%)
  Statistics size: 12 MB (0.78%)
With 7 times less travelled distance, you got 3 times more nodes and a dictionary of 8M words. I am not sure what the robot is doing or at what speed it moves, but do you really need Rtabmap/DetectionRate=10 and Kp/MaxFeatures=2000? For the actual issue: on the database above (using default parameters), with its 3.5M words I see RAM usage increasing to 8.7 GB. With 8M words, I guess it could go over 16 GB of RAM like you observed. As you are using binary features, you may set the parameter Kp/ByteToFloat to true:
Param: Kp/ByteToFloat = "false" [For Kp/NNStrategy=1, binary descriptors are converted to float by converting each byte to float instead of converting each bit to float. When converting bytes instead of bits, less memory is used and search is faster at the cost of slightly less accurate matching.]
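For example, if you are launching rtabmap through rtabmap_ros (the node setup here is an assumption about your system), it can be set like any other RTAB-Map parameter in the launch file:

<param name="Kp/ByteToFloat" type="string" value="true"/>

The standalone app should also accept it as a command-line override (rtabmap --Kp/ByteToFloat true).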
On my computer, doing so I get 5.8 GB instead of 8.7 GB. I would however suggest that you reprocess your database with Kp/MaxFeatures=500 (the default) to cut the dictionary size approximately by 4 (2M words instead of 8M):
rtabmap-reprocess --Kp/MaxFeatures 500 rtabmap.db output.db
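To confirm the effect, you can print database statistics like the ones at the top of this post on the reprocessed database and check the Dictionary and Features sizes:

rtabmap-info output.db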
Another option would be to use a fixed-size dictionary of, for example, 1M words.
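If you go that way, the relevant parameters should be Kp/IncrementalDictionary and Kp/DictionaryPath; the dictionary file name below is hypothetical, and it would have to point to a pre-computed dictionary of that size:

rtabmap-reprocess --Kp/IncrementalDictionary false --Kp/DictionaryPath fixed_dict_1M.txt rtabmap.db output.db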
cheers,
Mathieu