The ground truths of the RGB-D datasets were recorded with an external motion capture system (e.g., MotionAnalysis or VICON). Unless you have access to such a system, it is difficult to create a ground truth indoors.
If you want to test RTAB-Map with the hand-held RGB-D datasets, you can do it with the standalone version and the tgz archives (the following example uses rgbd_dataset_freiburg3_long_office_household.tgz).
$ cd rgbd_dataset_freiburg3_long_office_household
$ python associate.py rgb.txt depth.txt

This will create "rgb_sync" and "depth_sync" directories, both containing the same number of images.
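For reference, the association step just matches each RGB timestamp to the closest depth timestamp within a small tolerance. Here is a minimal sketch of that idea in Python (not the actual associate.py; the 0.02 s threshold is an assumption):

# Sketch of timestamp association, assuming the TUM file format:
# each line is "timestamp filename", lines starting with '#' are comments.
def read_stamps(path):
    stamps = {}
    with open(path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue
            stamp, name = line.split()[:2]
            stamps[float(stamp)] = name
    return stamps

rgb = read_stamps('rgb.txt')
depth = read_stamps('depth.txt')

max_diff = 0.02  # assumed tolerance in seconds
pairs = []
for t_rgb in sorted(rgb):
    if not depth:
        break
    # find the depth stamp closest to this rgb stamp
    t_depth = min(depth, key=lambda t: abs(t - t_rgb))
    if abs(t_depth - t_rgb) < max_diff:
        pairs.append((rgb[t_rgb], depth.pop(t_depth)))  # consume each depth frame once

print('%d rgb/depth pairs' % len(pairs))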
A calibration file with the following content should also be created, so that RTAB-Map uses the right camera parameters (these are the standard intrinsics published for the Freiburg sequences):

%YAML:1.0
camera_name: rgbddatasets
image_width: 0
image_height: 0
camera_matrix:
   rows: 3
   cols: 3
   data: [ 525., 0., 3.1950000000000000e+02, 0., 525., 2.3950000000000000e+02, 0., 0., 1. ]
distortion_coefficients:
   rows: 1
   cols: 5
   data: [ 0., 0., 0., 0., 0. ]
rectification_matrix:
   rows: 3
   cols: 3
   data: [ 1., 0., 0., 0., 1., 0., 0., 0., 1. ]
projection_matrix:
   rows: 3
   cols: 4
   data: [ 525., 0., 3.1950000000000000e+02, 0., 0., 525., 2.3950000000000000e+02, 0., 0., 0., 1., 0. ]
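To see what those numbers mean: fx = fy = 525 and (cx, cy) = (319.5, 239.5), so a depth pixel back-projects to 3D with the usual pinhole model. A quick sanity check in Python (the 5000 scale factor is the TUM convention for the 16-bit depth PNGs):

# Back-project one pixel using the intrinsics above (pinhole model, no distortion).
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

def to_3d(u, v, depth_raw):
    z = depth_raw / 5000.0       # TUM depth PNGs store depth in units of 1/5000 m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

print(to_3d(320, 240, 5000))     # pixel near the center at 1 m -> (~0, ~0, 1.0)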
Start mapping, and when the processing is finished (and stopped), you can export the poses in the RGB-D dataset format with the menu Edit -> Advanced -> Export poses... -> RGB-D SLAM format. The exported txt file can then be compared with the ground truth using the TUM online evaluation tool.
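The exported file uses the TUM RGB-D SLAM trajectory format: one pose per line as "timestamp tx ty tz qx qy qz qw". The online tool handles timestamp association and SE(3) alignment for you; a rough local check of the translational error could look like the sketch below (it naively assumes the trajectories are already in the same frame and have matching line counts, which the real evaluation does not require; the file names are assumptions):

import math

def read_traj(path):
    poses = []
    with open(path) as f:
        for line in f:
            if line.startswith('#') or not line.strip():
                continue
            vals = [float(v) for v in line.split()]
            poses.append(vals)  # [stamp, tx, ty, tz, qx, qy, qz, qw]
    return poses

est = read_traj('rtabmap_poses.txt')    # assumed name of the exported file
gt = read_traj('groundtruth.txt')

# naive per-index comparison; the official tool matches poses by timestamp instead
errs = [math.dist(e[1:4], g[1:4]) for e, g in zip(est, gt)]
rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
print('translational RMSE: %.3f m' % rmse)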
Note that RTAB-Map's default parameters may not work well for some datasets where the camera moves fast, like the freiburg1_360 dataset. Using the optical flow odometry strategy can reduce the chance of getting lost on that particular dataset. The freiburg3_long_office_household sequence works well; I've also tested it from the ROS bag with rgbdslam_datasets.launch (see the sketch below).
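For the ROS route, a minimal sketch of the run, assuming rtabmap_ros is installed and the bag was downloaded from the TUM site (the bag file name follows the dataset's naming; depending on the launch file you may also need use_sim_time set to true):

$ roslaunch rtabmap_ros rgbdslam_datasets.launch
$ rosbag play --clock rgbd_dataset_freiburg3_long_office_household.bag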
For the robot SLAM datasets, which include odometry and laser scans from the robot, this could be done with the ROS package and the ROS bags... but I haven't given it a try yet.
cheers,