Hey!
What I want to do is to benchmark the odometry accuracy of the different techniques, i.e. the deviation of the estimated position from the real position. Is there already a way to do that in rtabmap, like in the benchmark tutorial? Best regards, Michael
Administrator
No, there is no benchmark included in rtabmap to evaluate odometry. A ground truth is required for benchmarking. Unless you have a localization system (like a Vicon) to produce your own ground truth, these rgbd rosbags could be used. I've made a small test launch file (rgbdslam_datasets.launch) to use these rosbags with rtabmap. The example shows the ground-truth TF along with the TF computed by odometry/rtabmap in RVIZ, but no actual comparison data is collected. It should however be possible to do it by comparing the TFs (relative and absolute errors).
I've tested this launch file with the "freiburg3_long_office_household" rosbag sequence:

$ wget http://vision.in.tum.de/rgbd/dataset/freiburg3/rgbd_dataset_freiburg3_long_office_household.bag
$ rosbag decompress rgbd_dataset_freiburg3_long_office_household.bag
$ roslaunch rtabmap_ros rgbdslam_datasets.launch feature:=6
$ rosbag play --clock rgbd_dataset_freiburg3_long_office_household.bag

where the "feature" argument can take any of these values: 0=SURF, 1=SIFT, 2=ORB, 3=FAST/FREAK, 4=FAST/BRIEF, 5=GFTT/FREAK, 6=GFTT/BRIEF, 7=BRISK (rtabmap's parameter name is "Odom/FeatureType").
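The TF comparison mentioned above (relative and absolute errors) is not implemented in rtabmap itself, but the idea can be sketched as follows. This is a minimal illustration, assuming the estimated and ground-truth trajectories are already time-synchronized and expressed in the same coordinate frame (in practice you would associate poses by timestamp and align the trajectories first, as the TUM RGB-D benchmark tools do); function names here are illustrative, not part of rtabmap:

```python
# Minimal sketch of absolute and relative trajectory error over position-only
# trajectories (N x 3 arrays of x, y, z). Assumes both trajectories are
# time-synchronized and in the same frame -- an assumption, not rtabmap code.
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error: RMS of per-pose translational error."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    diff = est - gt                      # per-pose position error (N x 3)
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

def rpe_translations(est, gt, delta=1):
    """Relative pose error: drift of motion increments over `delta` frames."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    d_est = est[delta:] - est[:-delta]   # estimated motion increments
    d_gt = gt[delta:] - gt[:-delta]      # ground-truth motion increments
    return np.linalg.norm(d_est - d_gt, axis=1)
```

For example, an estimated trajectory [[0,0,0],[1,0,0]] against ground truth [[0,0,0],[1.1,0,0]] gives an ATE RMSE of sqrt(0.005) ≈ 0.071 m.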
Ok, thank you for your reply! I'll see what I can do with that.
Best regards, Michael
Hi!
My problem is that I can't run the rtabmap ROS version on my NVIDIA Jetson board, so I'm forced to use the standalone version of rtabmap, and I guess the rosbags are not an option then. I have two new ideas. The first is to take the RGB and depth images provided by the webpage you mentioned, put them in a database file and process it as usual with rtabmap to get the trajectory. Then I can compare my trajectory data with the data they offer. But if I look inside a database file, there is much more stuff in it, so I guess just having the images in the database is not enough? The second idea is to use the RGB and depth images as input for rtabmap instead of the Kinect camera, but I'm not sure where the best point in the code is to implement that. Do you have a hint, in case this is possible at all? Thank you in advance! Best regards, Michael
Administrator
You can record just the RGB image, depth image and camera info topics to an RTAB-Map database if you want (the data_recorder node can be used for that). The database can then be selected as the source in the standalone version. See data_recorder.launch as an example of usage of the data_recorder node (recording data throttled at 10 Hz):
$ roslaunch openni_launch openni.launch depth_registration:=true
$ roslaunch rtabmap_ros data_recorder.launch

On "ctrl-c", the database is saved to "~/.ros/output.db". You can verify that the images are saved in the database with this tool:

$ rtabmap-databaseViewer ~/.ros/output.db

With the ROS bags, you may have to remap the topic names in the launch file. However, it may be difficult to compare easily with the ground truth. You could simulate odometry messages using the ground-truth TF and connect the odometry messages to the data_recorder node (set the parameter "subscribe_odometry=true" and remap the "odom" topic). Each image will then be registered to a pose in the database (the poses can be ignored when reading the database as an input source). Note that images can be recorded faster than 10 Hz if your computer can run visual odometry above 10 Hz.
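As a rough illustration of that last suggestion, a launch file running the data_recorder node with odometry subscription enabled could look like the sketch below. The "subscribe_odometry" parameter and "odom" remap come from the description above; the node type and the image/camera_info topic names are assumptions, so check the rtabmap_ros documentation and the actual topic list of your bag (rostopic list / rosbag info) before using it:

```xml
<!-- Hypothetical sketch, not a tested launch file: record RGB-D images
     together with externally provided (e.g. ground-truth) odometry.
     All topic names on the right-hand side of the remaps are assumptions. -->
<launch>
  <node pkg="rtabmap_ros" type="data_recorder" name="data_recorder" output="screen">
    <param name="subscribe_odometry" type="bool" value="true"/>
    <remap from="odom"            to="/ground_truth/odom"/>
    <remap from="rgb/image"       to="/camera/rgb/image_color"/>
    <remap from="depth/image"     to="/camera/depth/image"/>
    <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
  </node>
</launch>
```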