RGBD Camera and 2D Lidar: Is there any way to merge scans?


RGBD Camera and 2D Lidar: Is there any way to merge scans?

Davor96
Hello,

Currently, I am working with a TurtleBot 2 (Kobuki base) equipped with an RPLIDAR A2 lidar and an Orbbec Astra RGBD camera.
I made a launch file by combining the tutorial at
http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot (section 2.1, Kinect + Odometry + 2D laser) with the
demo_turtlebot_mapping.launch file from rtabmap_ros, and added a few new parameters.

I played with the parameters and managed to get somewhat good results, but I want to use the precision of
the 2D lidar for map creation and navigation, and the RGBD camera for detecting obstacles that are below or
above the lidar's scan plane (even if that means cutting the camera's view range to 1-2 m).

If the Grid/FromDepth parameter is set to false, the lidar is used for mapping and localization (local and global map),
but I cannot get the robot to detect obstacles below or above the scan plane (because the camera does not contribute to the local/global map).

If the Grid/FromDepth parameter is set to true, the lidar is used only for navigation (local map) and the RGBD camera
is used for mapping (global map). This is not good, because the map made this way is poor (in simulation it
works perfectly, of course) and takes more time to build even with the camera's field of view set to maximum.
The map created this way is worse than the one the robot builds using only an RGBD camera with the depthimage_to_laserscan
package. (There is also a problem with ground detection in the real world, which is out of the scope of this
topic and which I plan to fix without 3D raytracing; but even without that problem, this map is the worst of the three.)

One solution, which is somewhat good, is to build the map using the lidar first (Grid/FromDepth
set to false), and then, after the map is created, switch to the RGBD camera for mapping (Grid/FromDepth
set to true). The problem with this approach is that the map will degrade over time (while patrolling),
but at least it will keep the same geometry as before.

The second problem is that when localization mode is enabled (memory cannot be incremented), the global
map cannot change, so the robot cannot be made aware of obstacles above or below the lidar plane.

To summarize:
1. Is there any way to merge the lidar and RGBD camera data into the projected map? If not, is there an alternative?

2. Can obstacles below or above the lidar plane be detected while in localization mode?

Please, correct me if any of my assumptions are wrong. I would appreciate your help.

Here is the launch file:
<!-- Adapted from http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot and demo_turtlebot_mapping.launch from rtabmap_ros -->

<launch>
  <arg name="database_path"     default="rtabmap.db"/>
  <arg name="rgbd_odometry"     default="false"/>
  <arg name="rtabmapviz"        default="false"/>
  <arg name="localization"      default="false"/>
  <arg name="simulation"        default="false"/>
  <arg name="sw_registered"     default="false"/>
  <arg     if="$(arg localization)" name="args"  default=""/>
  <arg unless="$(arg localization)" name="args"  default="--delete_db_on_start"/>

  <arg     if="$(arg simulation)" name="rgb_topic"   default="/camera/rgb/image_raw"/>
  <arg unless="$(arg simulation)" name="rgb_topic"   default="/camera/rgb/image_rect_color"/>
  <arg     if="$(arg simulation)" name="depth_topic" default="/camera/depth/image_raw"/>
  <arg unless="$(arg simulation)" name="depth_topic" default="/camera/depth_registered/image_raw"/>
  <arg name="camera_info_topic" default="/camera/rgb/camera_info"/>


  <!-- Navigation stuff (move_base) -->
  <include file="$(find turtlebot_navigation)/launch/includes/move_base.launch.xml"/>

  <group ns="rtabmap">
	
    <!-- Use RGBD synchronization -->
    <!-- Here is a general example using a standalone nodelet, 
         but it is recommended to attach this nodelet to nodelet 
         manager of the camera to avoid topic serialization -->
    <node pkg="nodelet" type="nodelet" name="rgbd_sync" args="standalone rtabmap_ros/rgbd_sync" output="screen">
      <!-- input -->
      <remap from="rgb/image"       to="$(arg rgb_topic)"/>
      <remap from="depth/image"     to="$(arg depth_topic)"/>
      <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
      <!-- output -->
      <remap from="rgbd_image"      to="rgbd_image"/> 
      
      <!-- Should be true for camera topics that are not synchronized
           (e.g., false for Kinect v2, ZED, RealSense; true for Xtion, Kinect 360) -->
      <param name="approx_sync"       value="true"/> 
    </node>

    <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="$(arg args)">
          <param name="frame_id" type="string" value="base_link"/>

          <param name="subscribe_depth" type="bool" value="false"/>   <!-- changed to false (rgbd_sync is used instead) -->
          <param name="subscribe_rgbd" type="bool" value="true"/>
          <param name="subscribe_scan" type="bool" value="true"/>

          <remap from="odom" to="/odom"/>	<!-- changed to match the robot -->
          <remap from="scan" to="/scan"/>	<!-- changed to match the lidar -->
          <remap from="rgbd_image" to="rgbd_image"/> <!-- changed to match the camera; was rgbd_image in both places -->

          <param name="queue_size" type="int" value="10"/>

          <!-- RTAB-Map's parameters -->
          <param name="RGBD/NeighborLinkRefining" type="string" value="true"/>
          <param name="RGBD/ProximityBySpace"     type="string" value="true"/>
          <param name="RGBD/AngularUpdate"        type="string" value="0.01"/>
          <param name="RGBD/LinearUpdate"         type="string" value="0.01"/>
          <param name="RGBD/OptimizeFromGraphEnd" type="string" value="false"/>
          <param name="Grid/FromDepth"            type="string" value="true"/> <!-- true = occupancy grid from the camera depth; I set this because of octomap -->
          <param name="Reg/Force3DoF"             type="string" value="true"/>
          <param name="Reg/Strategy"              type="string" value="2"/> <!-- was 1 (ICP); set to 2 (visual + ICP); 0 = visual -->
          
          <!-- ICP parameters -->
          <param name="Icp/VoxelSize"                 type="string" value="0.05"/>
          <param name="Icp/MaxCorrespondenceDistance" type="string" value="0.1"/>

   	  <remap from="grid_map" to="/map"/>

	  <!-- localization mode -->
	  <param     if="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="false"/>
	  <param unless="$(arg localization)" name="Mem/IncrementalMemory" type="string" value="true"/>
	  <param name="Mem/InitWMWithAllNodes" type="string" value="$(arg localization)"/> 

	  <!-- for high and low obstacles -->
	  <param name="Grid/MaxGroundHeight" type="string" value="0.03"/>       <!-- all points below 3 cm are ground -->
  	  <param name="Grid/MaxObstacleHeight" type="string" value="0.45"/>       <!-- points between 3 cm and 45 cm are obstacles -->
   	  <param name="Grid/NormalsSegmentation" type="string" value="false"/> <!-- Use a simple passthrough (false) to filter the ground instead of normals segmentation (true); added to better segment the ground from obstacles -->

    </node>
    <!-- 
	TF for the camera and lidar:
	rosrun tf static_transform_publisher 0.1 0.0 0.2 0.0 0.0 0.0 base_laser_link laser 100  # run from a terminal if you do not want it in the launch file
	turtlebot_bringup/launch/includes/robot.launch.xml  # location of the robot description XML
    -->
    <node pkg="tf" type="static_transform_publisher" name="link1_broadcaster_lidar" args="0 0 0 0 0 0 1 base_laser_link laser 50" />
    <node pkg="tf" type="static_transform_publisher" name="link2_broadcaster_kamera" args="-0.1 0 0.3 0 0 0 1 base_link camera_link 50" />

    <arg name="wait_for_transform"  default="0.1"/> 
    <!-- transform wait time; 0.2 is the default.
      robot_state_publisher's publishing frequency in "turtlebot_bringup/launch/includes/robot.launch.xml" 
      can be increased from 5 to 10 Hz to avoid some TF warnings.
    -->
  </group>

</launch>

Re: RGBD Camera and 2D Lidar: Is there any way to merge scans?

matlabbe
Administrator
Hi,

You can use the lidar for the occupancy grid map (global planning in move_base) and feed the camera cloud to the local costmap as an additional observation source. The local costmap can then contain obstacles from both the lidar and the camera (see the obstacles_detection nodelet, which segments the ground and obstacles from the camera cloud). See for example:
https://github.com/introlab/rtabmap_ros/blob/master/launch/azimut3/config/local_costmap_params_2d.yaml
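A minimal local costmap fragment along those lines might look like the sketch below. This is only an illustration of the idea, not the linked file: the topic names (`/scan`, `/planner_cloud`) and height thresholds are assumptions for this robot.

```yaml
# Sketch: local costmap observation sources combining the lidar scan
# with the segmented obstacle cloud from obstacles_detection.
# Topic names and thresholds are assumptions, not the linked config.
obstacle_layer:
  observation_sources: laser_scan point_cloud
  laser_scan:
    data_type: LaserScan
    topic: /scan              # 2D lidar
    marking: true
    clearing: true
  point_cloud:
    data_type: PointCloud2
    topic: /planner_cloud     # assumed output of obstacles_detection
    marking: true
    clearing: true
    min_obstacle_height: 0.02 # ignore points on the ground
    max_obstacle_height: 2.0
```

With both sources listed, the local costmap marks obstacles seen by either sensor, which is what gives the robot awareness of objects above or below the lidar plane.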

cheers,
Mathieu

Re: RGBD Camera and 2D Lidar: Is there any way to merge scans?

Davor96
Hi,

thank you, Mathieu, for your response. I managed to resolve the problem by implementing the configuration from the two links you sent me.

In costmap_common_params.yaml (from turtlebot_navigation/params) I added "point_cloud_sensorA" and "point_cloud_sensorB"
to "observation_sources: scan bump", along with the descriptions of the two new sensors below it (I just changed "sensor_frame:" to "camera_link").
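
For reference, the additions could look roughly like this sketch (the topic names are assumptions; per the text above, only "sensor_frame:" was changed to "camera_link"):

```yaml
# Sketch of the costmap_common_params.yaml additions; topic names assumed.
observation_sources: scan bump point_cloud_sensorA point_cloud_sensorB

point_cloud_sensorA:
  data_type: PointCloud2
  topic: /ground            # assumed ground cloud from obstacles_detection
  sensor_frame: camera_link
  marking: false
  clearing: true            # ground points only clear the costmap
point_cloud_sensorB:
  data_type: PointCloud2
  topic: /obstacles         # assumed obstacle cloud from obstacles_detection
  sensor_frame: camera_link
  marking: true
  clearing: true
```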

And in my launch file, I added "data_throttle", "points_xyz_planner" and "obstacles_detection" as standalone nodelets
(I set "frame_id" to "camera_link" in the "obstacles_detection" node and remapped the inputs from the camera in the "data_throttle" node).
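
That nodelet chain could be sketched like this (standalone nodelets for readability; the exact remapped topic names here are assumptions based on the rtabmap_ros examples, not my exact file):

```xml
<!-- Sketch of the nodelet chain: throttle camera data, build a cloud,
     then segment ground/obstacles for the costmap. Topic names assumed. -->

<!-- Throttle the camera topics to reduce load -->
<node pkg="nodelet" type="nodelet" name="data_throttle"
      args="standalone rtabmap_ros/data_throttle">
  <param name="rate" value="5.0"/>
  <remap from="rgb/image_in"       to="/camera/rgb/image_rect_color"/>
  <remap from="depth/image_in"     to="/camera/depth_registered/image_raw"/>
  <remap from="rgb/camera_info_in" to="/camera/rgb/camera_info"/>
</node>

<!-- Build a point cloud from the throttled depth image -->
<node pkg="nodelet" type="nodelet" name="points_xyz_planner"
      args="standalone rtabmap_ros/point_cloud_xyz">
  <remap from="depth/image"       to="depth/image_out"/>
  <remap from="depth/camera_info" to="rgb/camera_info_out"/>
  <remap from="cloud"             to="cloudXYZ"/>
</node>

<!-- Segment the cloud into ground and obstacles for the costmap -->
<node pkg="nodelet" type="nodelet" name="obstacles_detection"
      args="standalone rtabmap_ros/obstacles_detection">
  <param name="frame_id" value="camera_link"/>
  <remap from="cloud"     to="cloudXYZ"/>
  <remap from="ground"    to="/ground"/>
  <remap from="obstacles" to="/obstacles"/>
</node>
```

The /ground and /obstacles outputs are then the clouds consumed by the two point_cloud sensors in costmap_common_params.yaml.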

Thank you for taking the time to help me.

Sincerely,
Davor