simulating input of a kinect camera


DocBrowm
Hello!

I want to make a speed and quality SLAM benchmark with different settings. For that I always need the same recording from my Kinect camera. My idea was to make one raw recording and always use it as simulated input for rtabmap to compare the different scenarios. In the preferences there is a database-as-source option. Can I use that for my purpose, or is there another way?

Thank you in advance!

Michael

Re: simulating input of a kinect camera

matlabbe
Administrator
Hello Michael,
The database source can be used for that.

Record some data

  1. Start rtabmap as for a usual mapping session:
     $ rtabmap
  2. "File -> New database..."
  3. Select your preferred camera driver:
    • "Detection -> Select source -> RGB-D camera..."
  4. Press "play", then "pause" after the mapping process has begun
  5. "Edit -> Data recorder...":
    • Select the output database name (default "output.db")
    • For "Save in RAM?", select "yes" ("no" records directly to the hard drive, which may add some latency)
    • The "Data recorder" window should open
  6. Press "pause" again to continue mapping
    • You should see the recorded images in the "Data recorder" window.
  7. Close the "Data recorder" window to stop recording; the database will then be saved to the hard drive.

Mapping using this recorded database as the source:

  1. Press "Stop", then "Edit -> Delete memory" to restart with a clean memory
  2. "Detection -> select source -> Database...":
    • Select the recorded "output.db"
    • [EDIT] For the "Use odometry saved in database?" message box, press "no"
  3. Press start
    • For the "Incompatible frame rates!" message box, press "yes"
    • For the "Some images may be skipped!" message box, press "yes"
  4. You are now mapping from the previously recorded data!

ROS

In ROS, you could use the rosbag mechanism to record some data and replay it. You could also use the RGB-D SLAM datasets to compare against a ground truth. Still experimental, I've created a launch file (rgbdslam_datasets.launch) that uses the RGB-D SLAM datasets and shows the estimated pose and the ground truth in RVIZ using TF. Example:
 $ wget http://vision.in.tum.de/rgbd/dataset/freiburg3/rgbd_dataset_freiburg3_long_office_household.bag
 $ rosbag decompress rgbd_dataset_freiburg3_long_office_household.bag
 $ roslaunch rtabmap rgbdslam_datasets.launch
 $ rosbag play --clock rgbd_dataset_freiburg3_long_office_household.bag
Regards,
Mathieu

Re: simulating input of a kinect camera

DocBrowm
Hey! Thanks for your reply, it works fine.

One problem with this method is that I cannot really simulate the RAM usage. If you use the database as source, it seems to need less RAM than with a camera. What I want to do is use rtabmap on an NVIDIA Jetson TK1 board, which has only 1.8 GB of RAM (~1 GB is free after boot). So after a few seconds the RAM is completely full and rtabmap crashes; that's not good if you want to use it on a robot. Is there a way to change the camera resolution or to use grayscale to save memory? I played around with some parameters in the settings, but none of them could decrease the RAM usage. Maybe I changed the wrong ones...

I'm always thankful for new ideas.

Best wishes
Michael  

Re: simulating input of a kinect camera

matlabbe
Administrator
Hi Michael,

The standalone version comes with the GUI for visualization; node data are cached there for point cloud generation, which takes a large part of the memory. On a ROS robot setup, the GUI would run on another computer (using rtabmapviz or rviz), with only the rtabmap node running on the robot (similar to this configuration). So with the GUI, you may want to increase the "3D cloud decimation" parameter for the map in the "General Settings -> 3D Rendering" panel to save some memory on cloud generation. If you don't need to see the map being built online, you can set the "Publish signature data" parameter (Advanced RTAB-Map Settings) to false and call "Edit -> Download all clouds" at the end of the experiment to see the final map.

After limiting the memory used by the GUI, there are some parameters in the core that limit the memory used. The parameters "Keep rehearsed locations" and "Using database in the memory..." can be set to false to save RAM on the core side (see panel "Advanced RTAB-Map Settings -> Memory -> Database"). You may also want to decrease RTAB-Map's loop closure detection rate (default 1 Hz): for example, at 0.5 Hz there will be half as many nodes in the map as at 1 Hz.
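As a sketch, the GUI settings above can also be set from ROS via core parameter names. The names below (Mem/RehearsedNodesKept, DbSqlite3/InMemory, Rtabmap/DetectionRate) and the package/node names are my assumed mapping — verify them against the parameter list of your rtabmap version:

```xml
<node name="rtabmap" pkg="rtabmap" type="rtabmap" output="screen">
  <!-- Do not keep rehearsed locations (saves RAM in the core) -->
  <param name="Mem/RehearsedNodesKept" type="string" value="false"/>
  <!-- Keep the SQLite database on disk instead of in RAM -->
  <param name="DbSqlite3/InMemory" type="string" value="false"/>
  <!-- Halve the detection rate: half as many nodes in the map -->
  <param name="Rtabmap/DetectionRate" type="string" value="0.5"/>
</node>
```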

RTAB-Map's memory management depends on the "T_time" parameter (warning: the default value is 0, i.e., deactivated). When this threshold is used, it is assumed that the real-time limit would be reached before the memory limit (RAM): "T_time" was set to 700 ms in our robot examples, and the RAM limit (over 3 GB on our robots) was never reached. However, if the maximum memory is reached before the real-time threshold, the memory threshold can be used instead to limit the size of the Working Memory, which indirectly caps the maximum RAM used:
 * GUI: Preferences->Advanced->RTAB-Map settings->"Maximum signatures allowed in Working Memory (0 means inf)"
 * ROS:
<param name="Rtabmap/MemoryThr" type="string" value="100"/>
However, activating RTAB-Map's memory management influences the size of the active local map; see the related paper to understand this effect.
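Put together, the two thresholds could be set like this. I am assuming here that the "T_time" parameter corresponds to the core name Rtabmap/TimeThr — check the parameter list of your rtabmap version before relying on it:

```xml
<!-- Real-time threshold: transfer nodes out of Working Memory when the
     update time exceeds 700 ms (0 = deactivated, the default) -->
<param name="Rtabmap/TimeThr" type="string" value="700"/>
<!-- Memory threshold: keep at most 100 nodes in Working Memory (0 means inf) -->
<param name="Rtabmap/MemoryThr" type="string" value="100"/>
```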

On a robot, you may have nodes like map_assembler or grid_map_assembler that convert the output of rtabmap into standard ROS messages like sensor_msgs/PointCloud2 or nav_msgs/OccupancyGrid; there is no mechanism yet to limit the size of the cache used by these nodes.

Regards,
Mathieu