Hi Sophye,
I'll try to answer this question, but I'm not very familiar with the RGB-D SLAM v2 package. Its GUI and results are somewhat similar to RTAB-Map's: with both, you can get a 3D point cloud of the environment using only a Kinect.
For integration with a robot, I think RTAB-Map may be easier because of its flexibility regarding which sensors you use. This page shows some
configurations that you can use to create your own launch file for Turtlebot2. For example, for the Turtlebot you may want a configuration like
"Kinect + Odometry + Fake 2D laser from Kinect". That configuration is used in this video, if this is the kind of result you are after:
https://www.youtube.com/watch?v=_qiLAWp7AqQ. A 2D map is also created for navigation.
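Here is a rough sketch of a launch file for that "Kinect + Odometry + Fake 2D laser from Kinect" configuration. The topic names (/camera/..., /odom, /scan) and the base_footprint frame are assumptions, adjust them to whatever your robot actually publishes:

<launch>
  <!-- Fake 2D laser scan generated from the Kinect depth image -->
  <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan">
    <remap from="image" to="/camera/depth_registered/image_raw"/>
    <remap from="scan"  to="/scan"/>
  </node>

  <!-- RTAB-Map subscribing to the Kinect images, the wheel odometry and the fake laser scan -->
  <group ns="rtabmap">
    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
      <param name="frame_id"        value="base_footprint"/> <!-- robot base frame -->
      <param name="subscribe_depth" value="true"/>
      <param name="subscribe_scan"  value="true"/>
      <remap from="rgb/image"       to="/camera/rgb/image_rect_color"/>
      <remap from="depth/image"     to="/camera/depth_registered/image_raw"/>
      <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
      <remap from="scan"            to="/scan"/>
      <remap from="odom"            to="/odom"/>
    </node>
  </group>
</launch>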
What is the goal of creating a 3D map in your project? Will it be used for robot navigation, robot teleoperation, or just for visualization? If it is just for visualization, you don't need a robot at all: just use a hand-held Kinect and compare the results between
RGB-D SLAM v2 and RTAB-Map (don't launch them at the same time!):
RGB-D SLAM v2:
$ roslaunch rgbdslam openni+rgbdslam.launch
RTAB-Map:
$ roslaunch freenect_launch freenect.launch depth_registration:=true
$ roslaunch rtabmap rgbd_mapping.launch
For navigation in a 3D map, you will need an octomap. Octomap seems more integrated in
RGB-D SLAM v2 than in RTAB-Map. However, RTAB-Map generates a 3D point cloud, and this point cloud can be converted to an octomap as described in this
post.
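For example, here is a minimal sketch using octomap_server, assuming RTAB-Map publishes its assembled cloud on /rtabmap/cloud_map and that the fixed frame is called "map" (check the actual topic and frame names on your setup):

<launch>
  <node pkg="octomap_server" type="octomap_server_node" name="octomap_server">
    <!-- resolution of the generated octree, in meters -->
    <param name="resolution" value="0.05"/>
    <!-- fixed frame in which the octomap is assembled -->
    <param name="frame_id" value="map"/>
    <!-- feed RTAB-Map's assembled point cloud to octomap_server -->
    <remap from="cloud_in" to="/rtabmap/cloud_map"/>
  </node>
</launch>

The resulting octomap can then be used by a 3D planner or visualized in RViz with the octomap_rviz_plugins package.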
For teleoperation, the
Demos section on the rtabmap_ros page shows some examples.
Cheers,
Mathieu