Outdoor 3D mapping with GPS and 4 sets of stereo cameras.


Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
I plan on using a drone to perform mapping with 4 sets of stereo cameras (mounted 90 degrees from each other).

Since the mapping is done outdoors, I have access to GPS for positioning. I currently have a node that publishes the drone's position, orientation, and linear and angular velocities (based on GPS and IMU).

RTAB-Map subscribes to an odom topic, which consists of:

geometry_msgs/PoseWithCovariance (position and orientation)
geometry_msgs/TwistWithCovariance (linear and angular velocities)

I can publish an odom topic using existing nodes, but I'm not sure what values I should use for the covariance (what is a good placeholder value?).
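
For reference, a minimal sketch of such an odom publisher, with a purely hypothetical 0.1 placeholder variance on the diagonals (the "right" values are exactly what I'm asking about):

// Minimal sketch of publishing the odom topic described above.
// The 0.1 diagonal variances are hypothetical placeholders, not recommendations.
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>
#include <geometry_msgs/Pose.h>
#include <geometry_msgs/Twist.h>

void publishOdom(ros::Publisher & odomPub,
                 const geometry_msgs::Pose & pose,
                 const geometry_msgs::Twist & twist)
{
	nav_msgs::Odometry odom;
	odom.header.stamp = ros::Time::now();
	odom.header.frame_id = "odom";      // world-fixed odometry frame
	odom.child_frame_id = "base_link";  // body frame of the drone
	odom.pose.pose = pose;              // position + orientation from GPS/IMU
	odom.twist.twist = twist;           // linear + angular velocities
	// The covariances are 6x6 row-major arrays; only the diagonal is filled here.
	for(int i = 0; i < 6; ++i)
	{
		odom.pose.covariance[i*6 + i] = 0.1;   // placeholder variance
		odom.twist.covariance[i*6 + i] = 0.1;  // placeholder variance
	}
	odomPub.publish(odom);
}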

I noticed there is a demo using 2 Kinect sensors. I'm assuming I can re-configure it to work with 4 sets of stereo cameras, correct?


I don't think I necessarily need localization since I have access to GPS. I want to keep the processing load as low as possible for a faster update frequency (faster response), so is it possible to use RTAB-Map for mapping only?


Anyways, my goals are 3D mapping (octree/point cloud), navigation (path planning), obstacle avoidance, and exploration. Will RTAB-Map be a good fit for implementing my goals?

Thanks

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
Hi,

It is possible to disable loop closure detection in rtabmap, so that it only accumulates data and constructs a point cloud. The most efficient way to disable loop closure detection is to set the parameter "Kp/MaxFeatures" to -1:
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
    ...
    <param name="Kp/MaxFeatures" type="string" value="-1"/>
</node>

To accumulate data from the 4 cameras, you will need to
  1- compute a disparity image for each camera (stereo_img_proc),
  2- convert it to depth (rtabmap_ros/disparity_to_depth),
  3- then create a new callback in rtabmap node (CoreWrapper.cpp) similar to depth2Callback() but with two more cameras:
void CoreWrapper::depth4Callback(
		const nav_msgs::OdometryConstPtr & odomMsg,
		const sensor_msgs::ImageConstPtr& image1Msg,
		const sensor_msgs::ImageConstPtr& depth1Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo1Msg,
		const sensor_msgs::ImageConstPtr& image2Msg,
		const sensor_msgs::ImageConstPtr& depth2Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo2Msg,
		const sensor_msgs::ImageConstPtr& image3Msg,
		const sensor_msgs::ImageConstPtr& depth3Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo3Msg,
		const sensor_msgs::ImageConstPtr& image4Msg,
		const sensor_msgs::ImageConstPtr& depth4Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo4Msg)
{
...
}
where the image topics are the left images, the depth topics are the disparity images (converted to depth), and the camera info topics are the left camera info topics.

Note also that the TF between the cameras must be accurate and the images must have the same size. I'm not sure if the ROS callback can synchronize that many messages at the same time though. Maybe a node pre-synchronizing the topics into a list could be created, then the callback could be called with this list.

RTAB-Map will give you a 3D point cloud that can be used as input to an octomap. Note that computing 4 disparity images may be computationally expensive. I'm not sure if IMU/GPS alone would be enough for good mapping; rtabmap_ros/rgbd_odometry (visual odometry) could also be done with the inputs of the cameras (using a callback similar to the one above for the rgbd_odometry node), but with a significant computation load.

cheers



Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
Thanks,

The cameras are greyscale and the resolution is only 320x240, so the total resolution is only equivalent to 640x480 (well, it will have to use a different tf for each camera).

For processing power I have access to an NVIDIA JTX1 or an Intel i5-6260U (I need to explore the speed-up of using CUDA for generating depth maps, so I'll probably just implement my own node with OpenCV for generating depth maps).
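
For the OpenCV route, a minimal sketch of the disparity generation I have in mind (the block-matcher parameters are just placeholders to tune):

// Minimal sketch: disparity from a rectified greyscale stereo pair with
// OpenCV's block matcher. numDisparities/blockSize are placeholder values.
#include <opencv2/core.hpp>
#include <opencv2/calib3d.hpp>

cv::Mat computeDisparity(const cv::Mat & leftRect, const cv::Mat & rightRect)
{
	// numDisparities must be a multiple of 16, blockSize must be odd
	cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 15);
	cv::Mat disparity; // CV_16SC1, fixed-point with 4 fractional bits
	bm->compute(leftRect, rightRect, disparity);
	return disparity;
}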

The goal is not to generate a detailed map. I mainly need a map to determine areas needing exploration, medium-range navigation, and possibly obstacle avoidance (I need to figure out if I'm going with reactive obstacle avoidance or with map-based navigation, which will require loop closure to combat drift and should inherently avoid obstacles).

I'm hoping to achieve at least 5 fps if possible.

Thanks


Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
In reply to this post by matlabbe
I went ahead and implemented a callback to process 4 depth images.

void CoreWrapper::depth4Callback(
		const nav_msgs::OdometryConstPtr & odomMsg,
		const sensor_msgs::ImageConstPtr& image1Msg,
		const sensor_msgs::ImageConstPtr& depth1Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo1Msg,
		const sensor_msgs::ImageConstPtr& image2Msg,
		const sensor_msgs::ImageConstPtr& depth2Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo2Msg,
		const sensor_msgs::ImageConstPtr& image3Msg,
		const sensor_msgs::ImageConstPtr& depth3Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo3Msg,
		const sensor_msgs::ImageConstPtr& image4Msg,
		const sensor_msgs::ImageConstPtr& depth4Msg,
		const sensor_msgs::CameraInfoConstPtr& cameraInfo4Msg)

{
	if(!commonOdomUpdate(odomMsg))
	{
		return;
	}

	std::vector<sensor_msgs::ImageConstPtr> imageMsgs;
	std::vector<sensor_msgs::ImageConstPtr> depthMsgs;
	std::vector<sensor_msgs::CameraInfoConstPtr> cameraInfoMsgs;
	imageMsgs.push_back(image1Msg);
	imageMsgs.push_back(image2Msg);
	imageMsgs.push_back(image3Msg);
	imageMsgs.push_back(image4Msg);
	depthMsgs.push_back(depth1Msg);
	depthMsgs.push_back(depth2Msg);
	depthMsgs.push_back(depth3Msg);
	depthMsgs.push_back(depth4Msg);
	cameraInfoMsgs.push_back(cameraInfo1Msg);
	cameraInfoMsgs.push_back(cameraInfo2Msg);
	cameraInfoMsgs.push_back(cameraInfo3Msg);
	cameraInfoMsgs.push_back(cameraInfo4Msg);

	sensor_msgs::LaserScanConstPtr scanMsg; // Null
	sensor_msgs::PointCloud2ConstPtr scan3dMsg; // Null
	commonDepthCallback(odomMsg->header.frame_id, imageMsgs, depthMsgs, cameraInfoMsgs, scanMsg, scan3dMsg);
}


I'm still kind of new to ROS and RTAB-Map, so I'm trying to figure out how to create a launch file that works with my sensor data.

These are the primary topics being published from my robot.


/camera1/left/image_rect
/camera1/left/camera_info
/camera1/right/image_rect
/camera1/right/camera_info
/camera1/depth_image

/camera2/left/image_rect
/camera2/left/camera_info
/camera2/right/image_rect
/camera2/right/camera_info
/camera2/depth_image

/camera3/left/image_rect
/camera3/left/camera_info
/camera3/right/image_rect
/camera3/right/camera_info
/camera3/depth_image

/camera4/left/image_rect
/camera4/left/camera_info
/camera4/right/image_rect
/camera4/right/camera_info
/camera4/depth_image

/odom


Anyways, are you suggesting that I treat my stereo cameras as RGB-D cameras to feed into rtabmap? Basically, instead of publishing both left and right images, I should just publish the left image as RGB (greyscale) and a depth image (generated from the stereo images). Thus my topics should look like this?


/camera1/rgb/image_rect_color
/camera1/rgb/camera_info
/camera1/depth_registered/image_raw

/camera2/rgb/image_rect_color
/camera2/rgb/camera_info
/camera2/depth_registered/image_raw

/camera3/rgb/image_rect_color
/camera3/rgb/camera_info
/camera3/depth_registered/image_raw

/camera4/rgb/image_rect_color
/camera4/rgb/camera_info
/camera4/depth_registered/image_raw

/odom


As for the launch file, I'm basing it off the 2-Kinect demo.

<launch>
   
  <!-- Multi-cameras demo  with 4 Kinects -->
   
  <!-- Cameras -->
  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="depth_registration" value="True" />
    <arg name="camera" value="camera1" />
    <arg name="device_id" value="#1" />
  </include>
  <include file="$(find freenect_launch)/launch/freenect.launch">
    <arg name="depth_registration" value="True" />
    <arg name="camera" value="camera2" />
    <arg name="device_id" value="#2" />
  </include>

  <!-- Frames: Kinects are placed at 90 degrees -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_camera1_tf"
      args="0.0 0.0 0.0 0.0 0.0 0.0 /base_link /camera1_link 100" />
  <node pkg="tf" type="static_transform_publisher" name="base_to_camera2_tf"
      args="-0.1325 -0.1975 0.0 -1.570796327 0.0 0.0 /base_link /camera2_link 100" />

  <node pkg="tf" type="static_transform_publisher" name="base_to_camera3_tf"
      args="#### #### #### #### 0.0 0.0 /base_link /camera3_link 100" />
  <node pkg="tf" type="static_transform_publisher" name="base_to_camera4_tf"
      args="#### #### #### #### 0.0 0.0 /base_link /camera4_link 100" />
   

   <!-- Choose visualization -->
   <arg name="rviz"       default="false" />
   <arg name="rtabmapviz" default="true" /> 
   
   <!-- ODOMETRY MAIN ARGUMENTS: 
        -"strategy"        : Strategy: 0=Frame-to-Map 1=Frame-to-Frame
        -"feature"         : Feature type: 0=SURF 1=SIFT 2=ORB 3=FAST/FREAK 4=FAST/BRIEF 5=GFTT/FREAK 6=GFTT/BRIEF 7=BRISK
        -"nn"              : Nearest neighbor strategy : 0=Linear, 1=FLANN_KDTREE, 2=FLANN_LSH, 3=BRUTEFORCE 
                             Set to 1 for float descriptor like SIFT/SURF                  
                             Set to 3 for binary descriptor like ORB/FREAK/BRIEF/BRISK  
        -"max_depth"       : Maximum features depth (m)  
        -"min_inliers"     : Minimum visual correspondences to accept a transformation (m)  
        -"inlier_distance" : RANSAC maximum inliers distance (m)  
        -"local_map"       : Local map size: number of unique features to keep track 
        -"odom_info_data"  : Fill odometry info messages with inliers/outliers data.
    -->
   <arg name="strategy"        default="0" />
   <arg name="feature"         default="6" />
   <arg name="nn"              default="3" />
   <arg name="max_depth"       default="4.0" />
   <arg name="min_inliers"     default="20" />
   <arg name="inlier_distance" default="0.02" />
   <arg name="local_map"       default="1000" />
   <arg name="odom_info_data"  default="true" />
   <arg name="wait_for_transform"  default="true" />
        
  <group ns="rtabmap">
  
    <!-- Odometry -->
    <node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry" output="screen">
      <remap from="rgb0/image"       to="/camera1/rgb/image_rect_color"/>
      <remap from="depth0/image"     to="/camera1/depth_registered/image_raw"/>
      <remap from="rgb0/camera_info" to="/camera1/rgb/camera_info"/>
      
      <remap from="rgb1/image"       to="/camera2/rgb/image_rect_color"/>
      <remap from="depth1/image"     to="/camera2/depth_registered/image_raw"/>
      <remap from="rgb1/camera_info" to="/camera2/rgb/camera_info"/>
	  
      <remap from="rgb2/image"       to="/camera3/rgb/image_rect_color"/>
      <remap from="depth2/image"     to="/camera3/depth_registered/image_raw"/>
      <remap from="rgb2/camera_info" to="/camera3/rgb/camera_info"/>

      <remap from="rgb3/image"       to="/camera4/rgb/image_rect_color"/>
      <remap from="depth3/image"     to="/camera4/depth_registered/image_raw"/>
      <remap from="rgb3/camera_info" to="/camera4/rgb/camera_info"/>



	  <param name="frame_id"                 type="string" value="base_link"/>
	  <param name="depth_cameras"            type="int"    value="2"/>
	  <param name="wait_for_transform"       type="bool"   value="$(arg wait_for_transform)"/>
	  <param name="Odom/Strategy"            type="string" value="$(arg strategy)"/> 
	  <param name="Vis/FeatureType"         type="string" value="$(arg feature)"/>  
	  <param name="Vis/CorNNType"           type="string" value="$(arg nn)"/>
	  <param name="Vis/MaxDepth"            type="string" value="$(arg max_depth)"/>  
	  <param name="Vis/MinInliers"          type="string" value="$(arg min_inliers)"/> 
	  <param name="Vis/InlierDistance"      type="string" value="$(arg inlier_distance)"/>       
      <param name="OdomF2M/MaxSize" type="string" value="$(arg local_map)"/> 
      <param name="Odom/FillInfoData"        type="string" value="$(arg odom_info_data)"/>   
    </node>
  
    <!-- Visual SLAM (robot side) -->
    <!-- args: "delete_db_on_start" and "udebug" -->
    <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start">
      <param name="subscribe_depth"  type="bool"   value="true"/>
      <param name="depth_cameras"    type="int"    value="2"/>
	  <param name="frame_id"         type="string" value="base_link"/>
	  <param name="gen_scan"         type="bool"   value="true"/>
	  <param name="wait_for_transform" type="bool"   value="$(arg wait_for_transform)"/>
	
      <remap from="rgb0/image"       to="/camera1/rgb/image_rect_color"/>
      <remap from="depth0/image"     to="/camera1/depth_registered/image_raw"/>
      <remap from="rgb0/camera_info" to="/camera1/rgb/camera_info"/>
     
      <remap from="rgb1/image"       to="/camera2/rgb/image_rect_color"/>
      <remap from="depth1/image"     to="/camera2/depth_registered/image_raw"/>
      <remap from="rgb1/camera_info" to="/camera2/rgb/camera_info"/>

      <remap from="rgb2/image"       to="/camera3/rgb/image_rect_color"/>
      <remap from="depth2/image"     to="/camera3/depth_registered/image_raw"/>
      <remap from="rgb2/camera_info" to="/camera3/rgb/camera_info"/>

      <remap from="rgb3/image"       to="/camera4/rgb/image_rect_color"/>
      <remap from="depth3/image"     to="/camera4/depth_registered/image_raw"/>
      <remap from="rgb3/camera_info" to="/camera4/rgb/camera_info"/>



      <param name="Vis/MinInliers"     type="string" value="10"/>
      <param name="Vis/InlierDistance" type="string" value="$(arg inlier_distance)"/>
    </node>
  
    <!-- Visualisation RTAB-Map -->
    <node if="$(arg rtabmapviz)" pkg="rtabmap_ros" type="rtabmapviz" name="rtabmapviz" args="-d $(find rtabmap_ros)/launch/config/rgbd_gui.ini" output="screen">
      <param name="subscribe_depth"     type="bool"   value="true"/>
      <param name="subscribe_odom_info" type="bool"   value="$(arg odom_info_data)"/>
      <param name="frame_id"            type="string" value="base_link"/>
      <param name="depth_cameras"       type="int"    value="2"/>
      <param name="wait_for_transform"  type="bool"   value="$(arg wait_for_transform)"/>
    
      <remap from="rgb0/image"       to="/camera1/rgb/image_rect_color"/>
      <remap from="depth0/image"     to="/camera1/depth_registered/image_raw"/>
      <remap from="rgb0/camera_info" to="/camera1/rgb/camera_info"/>
     
      <remap from="rgb1/image"       to="/camera2/rgb/image_rect_color"/>
      <remap from="depth1/image"     to="/camera2/depth_registered/image_raw"/>
      <remap from="rgb1/camera_info" to="/camera2/rgb/camera_info"/>
    </node>
  
  </group>
  
  <!-- Visualization RVIZ -->
  <node if="$(arg rviz)" pkg="rviz" type="rviz" name="rviz" args="-d $(find rtabmap_ros)/launch/config/rgbd.rviz"/>
  <!-- sync cloud with odometry and voxelize the point cloud (for fast visualization in rviz) -->
  <node if="$(arg rviz)" pkg="nodelet" type="nodelet" name="standalone_nodelet"  args="manager" output="screen"/>
  <node if="$(arg rviz)" pkg="nodelet" type="nodelet" name="data_odom_sync" args="load rtabmap_ros/data_odom_sync standalone_nodelet">
    <remap from="rgb/image_in"       to="camera1/rgb/image_rect_color"/>
    <remap from="depth/image_in"     to="camera1/depth_registered/image_raw"/>
    <remap from="rgb/camera_info_in" to="camera1/rgb/camera_info"/>
    <remap from="odom_in"            to="rtabmap/odom"/>
    
    <remap from="rgb/image_out"       to="data_odom_sync/image"/>
    <remap from="depth/image_out"     to="data_odom_sync/depth"/>
    <remap from="rgb/camera_info_out" to="data_odom_sync/camera_info"/>
    <remap from="odom_out"            to="odom_sync"/>
  </node>
  <node if="$(arg rviz)" pkg="nodelet" type="nodelet" name="points_xyzrgb" args="load rtabmap_ros/point_cloud_xyzrgb standalone_nodelet">
    <remap from="rgb/image"       to="data_odom_sync/image"/>
    <remap from="depth/image"     to="data_odom_sync/depth"/>
    <remap from="rgb/camera_info" to="data_odom_sync/camera_info"/>
    <remap from="cloud"           to="voxel_cloud" />

    <param name="voxel_size" type="double" value="0.01"/>
  </node>

</launch>

Anyways, I can probably avoid the remaps in the launch file since I can just set up the topic names correctly to interface with RTAB-Map. I'll also need to fill in the static TFs, since the cameras are pointed 90 degrees from each other along with small offsets.

Anything else I'm missing that would help me get things up and running?






Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
Hi,

It is a good start. For the topics already published by your cameras, you may use them directly:
/cameraX/left/image_rect 
/cameraX/left/camera_info 
/cameraX/depth_image 

in

<remap from="rgb0/image"       to="/camera1/left/image_rect"/>
<remap from="rgb0/camera_info" to="/camera1/left/camera_info"/>
<remap from="depth0/image"     to="/camera1/depth_image"/>
...

Not sure if you already did it, but you also need to set up the callbacks here (rtabmap), here (rgbd_odometry) and here (rtabmapviz). However, I am not sure it is possible to create a callback with that many topics. It seems that the message_filters::Synchronizer API has a maximum of 9 topics that can be synchronized. You may have to modify the message_filters package to create a 13-topic interface (4*3 (rgb+camera_info+depth) + 1 (odom)) for rtabmap's callback and a 12-topic one for rgbd_odometry's callback.

Another option: as you are using stereo cameras, the stamps of the rgb+depth+camera_info topics should be exactly the same between them, so an ExactTime policy could be used to merge these three topics into a single one for each camera before connecting them to rtabmap or rgbd_odometry. You will then have 4 topics for 4 cameras (with different timestamps). Example of such a message:
# my_sync_pkg::Camera
std_msgs/Header header
sensor_msgs/Image color
sensor_msgs/Image depth
sensor_msgs/CameraInfo camera_info
 
Callback to synchronize three msgs from the same camera:
// using an ExactTime policy
void SyncNode::callback(
		const sensor_msgs::ImageConstPtr & color,
		const sensor_msgs::ImageConstPtr & depth,
		const sensor_msgs::CameraInfoConstPtr & cameraInfo)
{
	my_sync_pkg::Camera msg;
	msg.header = color->header;
	msg.color = *color;
	msg.depth = *depth;
	msg.camera_info = *cameraInfo;
	pub_.publish(msg);
}
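
For completeness, a minimal sketch of how such a SyncNode could be wired up with message_filters (the class, package, and topic names are assumptions; one instance would run per camera namespace):

// Sketch only: merges the three exactly-stamped topics of one stereo camera
// into a single my_sync_pkg/Camera message.
#include <ros/ros.h>
#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/exact_time.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>
#include <my_sync_pkg/Camera.h>

typedef message_filters::sync_policies::ExactTime<
		sensor_msgs::Image, sensor_msgs::Image, sensor_msgs::CameraInfo> ExactSyncPolicy;

class SyncNode
{
public:
	explicit SyncNode(ros::NodeHandle & nh) :
		colorSub_(nh, "left/image_rect", 1),
		depthSub_(nh, "depth_image", 1),
		infoSub_(nh, "left/camera_info", 1),
		sync_(ExactSyncPolicy(10), colorSub_, depthSub_, infoSub_)
	{
		pub_ = nh.advertise<my_sync_pkg::Camera>("camera_sync", 1);
		sync_.registerCallback(boost::bind(&SyncNode::callback, this, _1, _2, _3));
	}

	// same callback as shown above
	void callback(
			const sensor_msgs::ImageConstPtr & color,
			const sensor_msgs::ImageConstPtr & depth,
			const sensor_msgs::CameraInfoConstPtr & cameraInfo)
	{
		my_sync_pkg::Camera msg;
		msg.header = color->header;
		msg.color = *color;
		msg.depth = *depth;
		msg.camera_info = *cameraInfo;
		pub_.publish(msg);
	}

private:
	message_filters::Subscriber<sensor_msgs::Image> colorSub_;
	message_filters::Subscriber<sensor_msgs::Image> depthSub_;
	message_filters::Subscriber<sensor_msgs::CameraInfo> infoSub_;
	message_filters::Synchronizer<ExactSyncPolicy> sync_;
	ros::Publisher pub_;
};

int main(int argc, char ** argv)
{
	ros::init(argc, argv, "camera_sync");
	ros::NodeHandle nh("camera1"); // launch one instance per cameraX namespace
	SyncNode node(nh);
	ros::spin();
	return 0;
}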

Then you would have a callback like this in rtabmap:
// using an ApproximateTime policy
void CoreWrapper::depth4Callback(
		const nav_msgs::OdometryConstPtr & odomMsg,
		const my_sync_pkg::CameraConstPtr& camera1Msg,
		const my_sync_pkg::CameraConstPtr& camera2Msg,
		const my_sync_pkg::CameraConstPtr& camera3Msg,
		const my_sync_pkg::CameraConstPtr& camera4Msg
)

{
	if(!commonOdomUpdate(odomMsg))
	{
		return;
	}

	std::vector<const sensor_msgs::Image*> imageMsgs;
	std::vector<const sensor_msgs::Image*> depthMsgs;
	std::vector<const sensor_msgs::CameraInfo*> cameraInfoMsgs;
	imageMsgs.push_back(&camera1Msg->color);
	imageMsgs.push_back(&camera2Msg->color);
	imageMsgs.push_back(&camera3Msg->color);
	imageMsgs.push_back(&camera4Msg->color);
	depthMsgs.push_back(&camera1Msg->depth);
	depthMsgs.push_back(&camera2Msg->depth);
	depthMsgs.push_back(&camera3Msg->depth);
	depthMsgs.push_back(&camera4Msg->depth);
	cameraInfoMsgs.push_back(&camera1Msg->camera_info);
	cameraInfoMsgs.push_back(&camera2Msg->camera_info);
	cameraInfoMsgs.push_back(&camera3Msg->camera_info);
	cameraInfoMsgs.push_back(&camera4Msg->camera_info);

	sensor_msgs::LaserScanConstPtr scanMsg; // Null
	sensor_msgs::PointCloud2ConstPtr scan3dMsg; // Null

	// you will have to change the interface of commonDepthCallback() to
	// use vectors of raw pointers
	commonDepthCallback(odomMsg->header.frame_id, imageMsgs, depthMsgs, cameraInfoMsgs, scanMsg, scan3dMsg);
}
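
The corresponding wiring (in rtabmap's node, or in a standalone test like the sketch below) would then use an ApproximateTime policy over five topics (odom + 4 cameras), which stays under the 9-topic limit of message_filters::Synchronizer. my_sync_pkg and the topic names are assumptions carried over from above:

// Sketch only: synchronize odom + four my_sync_pkg/Camera topics and feed
// them to a single callback (in rtabmap this would be CoreWrapper::depth4Callback()).
#include <ros/ros.h>
#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <nav_msgs/Odometry.h>
#include <my_sync_pkg/Camera.h>

typedef message_filters::sync_policies::ApproximateTime<
		nav_msgs::Odometry,
		my_sync_pkg::Camera, my_sync_pkg::Camera,
		my_sync_pkg::Camera, my_sync_pkg::Camera> Depth4SyncPolicy;

void depth4Callback(
		const nav_msgs::OdometryConstPtr & odomMsg,
		const my_sync_pkg::CameraConstPtr & camera1Msg,
		const my_sync_pkg::CameraConstPtr & camera2Msg,
		const my_sync_pkg::CameraConstPtr & camera3Msg,
		const my_sync_pkg::CameraConstPtr & camera4Msg)
{
	// process the synchronized messages (see the callback body above)
}

int main(int argc, char ** argv)
{
	ros::init(argc, argv, "depth4_sync_test");
	ros::NodeHandle nh;
	message_filters::Subscriber<nav_msgs::Odometry> odomSub(nh, "odom", 1);
	message_filters::Subscriber<my_sync_pkg::Camera> cam1Sub(nh, "camera1/camera_sync", 1);
	message_filters::Subscriber<my_sync_pkg::Camera> cam2Sub(nh, "camera2/camera_sync", 1);
	message_filters::Subscriber<my_sync_pkg::Camera> cam3Sub(nh, "camera3/camera_sync", 1);
	message_filters::Subscriber<my_sync_pkg::Camera> cam4Sub(nh, "camera4/camera_sync", 1);
	message_filters::Synchronizer<Depth4SyncPolicy> sync(
			Depth4SyncPolicy(10), odomSub, cam1Sub, cam2Sub, cam3Sub, cam4Sub);
	sync.registerCallback(boost::bind(&depth4Callback, _1, _2, _3, _4, _5));
	ros::spin();
	return 0;
}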

cheers

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
Instead of using stereo_image_proc, I'm generating the disparity maps with OpenCV directly.

What units does RTAB-Map expect for the depth image?

Right now I'm generating a 16-bit disparity map. Depth Z = B*f/disparity, where B is the distance between the cameras (baseline) and f is the focal length, correct?

Anyways, the stereo imaging device actually generates all 4 stereo sets at the same time, so I can just publish all the images and depth images in one topic. There will be a tf associated with the center of the 4-stereo-camera set, but I'll need to provide a further transformation for each stereo pair. Where can I insert code to manually transform the stereo sets (basically a 90 degree yaw rotation with the device's tf as the reference frame)?

Anyways, my msg will look something like this

# my_sync_pkg::Camera
std_msgs/Header header
sensor_msgs/Image color1
sensor_msgs/Image depth1
sensor_msgs/CameraInfo camera_info1
sensor_msgs/Image color2
sensor_msgs/Image depth2
sensor_msgs/CameraInfo camera_info2
sensor_msgs/Image color3
sensor_msgs/Image depth3
sensor_msgs/CameraInfo camera_info3
sensor_msgs/Image color4
sensor_msgs/Image depth4
sensor_msgs/CameraInfo camera_info4

The callback function will look something like this:

void CoreWrapper::depth4Callback(
		const nav_msgs::OdometryConstPtr & odomMsg,
		const my_sync_pkg::CameraConstPtr& cameraMsg)
{
	if(!commonOdomUpdate(odomMsg))
	{
		return;
	}

	std::vector<const sensor_msgs::Image*> imageMsgs;
	std::vector<const sensor_msgs::Image*> depthMsgs;
	std::vector<const sensor_msgs::CameraInfo*> cameraInfoMsgs;
	imageMsgs.push_back(&cameraMsg->color1);
	imageMsgs.push_back(&cameraMsg->color2);
	imageMsgs.push_back(&cameraMsg->color3);
	imageMsgs.push_back(&cameraMsg->color4);
	depthMsgs.push_back(&cameraMsg->depth1);
	depthMsgs.push_back(&cameraMsg->depth2);
	depthMsgs.push_back(&cameraMsg->depth3);
	depthMsgs.push_back(&cameraMsg->depth4);
	cameraInfoMsgs.push_back(&cameraMsg->camera_info1);
	cameraInfoMsgs.push_back(&cameraMsg->camera_info2);
	cameraInfoMsgs.push_back(&cameraMsg->camera_info3);
	cameraInfoMsgs.push_back(&cameraMsg->camera_info4);

	sensor_msgs::LaserScanConstPtr scanMsg; // Null
	sensor_msgs::PointCloud2ConstPtr scan3dMsg; // Null

	// you will have to change the interface of commonDepthCallback() to
	// use vectors of raw pointers
	commonDepthCallback(odomMsg->header.frame_id, imageMsgs, depthMsgs, cameraInfoMsgs, scanMsg, scan3dMsg);
}

That, or do you think it is better if I just split up the stereo camera sets as you originally suggested (with 4 topics, 1 for each stereo set) and have tf deal with it?

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
Hi,

You can keep all your topics in the same message. For TF, you can create 4 static_transform_publisher nodes for convenience in a launch file.

To convert disparity (computed by OpenCV) to depth, I use the rtabmap::util2d::depthFromDisparity() method. Units of the resulting depth image should be:
- CV_16UC1: mm
- CV_32FC1: m
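
For reference, a manual sketch of the same conversion (not the actual rtabmap implementation), assuming a cv::StereoBM-style CV_16SC1 disparity with 4 fractional bits and placeholder fx/baseline values, producing CV_32FC1 depth in meters:

// Sketch: Z = baseline * fx / disparity, output CV_32FC1 in meters.
#include <opencv2/core.hpp>

cv::Mat disparityToDepth32F(const cv::Mat & disparity16S, float fx, float baselineM)
{
	cv::Mat depth(disparity16S.size(), CV_32FC1, cv::Scalar(0.0f));
	for(int r = 0; r < disparity16S.rows; ++r)
	{
		for(int c = 0; c < disparity16S.cols; ++c)
		{
			// divide by 16 to remove the 4 fractional bits
			float d = disparity16S.at<short>(r, c) / 16.0f;
			if(d > 0.0f)
			{
				depth.at<float>(r, c) = baselineM * fx / d;
			}
		}
	}
	return depth;
}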

cheers

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
I'm running into issues compiling rtabmap from source. Specifically, compiling rtabmap throws errors from gtsam.

The errors come from SymmetricBlockMatrix.h, SymmetricBlockMatrixBlockExpr.h, and HessianFactor.h in gtsam.

I had no issues building gtsam from source, but I did get a failed test case for make check (it was test 9 - testSpirit that failed).

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
Are you building GTSAM 4?
$ git clone https://bitbucket.org/gtborg/gtsam.git

The 3.2.1 won't work.


Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
I did download the current source with
$ git clone https://bitbucket.org/gtborg/gtsam.git

GTSAM compiled fine (libgtsam.so.4.0.0, libgtsam.so.4, libgtsam.so) and I installed it with
sudo make install

Anyways, here is the full error message from trying to compile rtabmap.

In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h: In instantiation of ‘class gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:130:62:   required from here
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:62:59: error: no type named ‘Index’ in ‘struct Eigen::internal::traits<gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix> >’
     typedef typename Eigen::internal::traits<This>::Index Index;
                                                           ^
In file included from /usr/local/include/gtsam/linear/HessianFactor.h:23:0,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘gtsam::SymmetricBlockMatrix::Block gtsam::SymmetricBlockMatrix::operator()(gtsam::DenseIndex, gtsam::DenseIndex)’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:131:43: error: no matching function for call to ‘gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(gtsam::SymmetricBlockMatrix&, gtsam::DenseIndex&, gtsam::DenseIndex&)’
       return Block(*this, i_block, j_block);
                                           ^
In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note: candidate: constexpr gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>&)
   class SymmetricBlockMatrixBlockExpr : public Eigen::EigenBase<SymmetricBlockMatrixBlockExpr<SymmetricBlockMatrixType> >
         ^
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note:   candidate expects 1 argument, 3 provided
In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h: In instantiation of ‘class gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:136:73:   required from here
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:62:59: error: no type named ‘Index’ in ‘struct Eigen::internal::traits<gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix> >’
     typedef typename Eigen::internal::traits<This>::Index Index;
                                                           ^
In file included from /usr/local/include/gtsam/linear/HessianFactor.h:23:0,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘gtsam::SymmetricBlockMatrix::constBlock gtsam::SymmetricBlockMatrix::operator()(gtsam::DenseIndex, gtsam::DenseIndex) const’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:137:48: error: no matching function for call to ‘gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrix&, gtsam::DenseIndex&, gtsam::DenseIndex&)’
       return constBlock(*this, i_block, j_block);
                                                ^
In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note: candidate: constexpr gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>&)
   class SymmetricBlockMatrixBlockExpr : public Eigen::EigenBase<SymmetricBlockMatrixBlockExpr<SymmetricBlockMatrixType> >
         ^
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note:   candidate expects 1 argument, 3 provided
In file included from /usr/local/include/gtsam/linear/HessianFactor.h:23:0,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘gtsam::SymmetricBlockMatrix::Block gtsam::SymmetricBlockMatrix::range(gtsam::DenseIndex, gtsam::DenseIndex, gtsam::DenseIndex, gtsam::DenseIndex)’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:146:107: error: no matching function for call to ‘gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(gtsam::SymmetricBlockMatrix&, gtsam::DenseIndex&, gtsam::DenseIndex&, gtsam::DenseIndex, gtsam::DenseIndex)’
       return Block(*this, i_startBlock, j_startBlock, i_endBlock - i_startBlock, j_endBlock - j_startBlock);
                                                                                                           ^
In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note: candidate: constexpr gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>&)
   class SymmetricBlockMatrixBlockExpr : public Eigen::EigenBase<SymmetricBlockMatrixBlockExpr<SymmetricBlockMatrixType> >
         ^
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note:   candidate expects 1 argument, 5 provided
In file included from /usr/local/include/gtsam/linear/HessianFactor.h:23:0,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘gtsam::SymmetricBlockMatrix::constBlock gtsam::SymmetricBlockMatrix::range(gtsam::DenseIndex, gtsam::DenseIndex, gtsam::DenseIndex, gtsam::DenseIndex) const’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:155:112: error: no matching function for call to ‘gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrix&, gtsam::DenseIndex&, gtsam::DenseIndex&, gtsam::DenseIndex, gtsam::DenseIndex)’
       return constBlock(*this, i_startBlock, j_startBlock, i_endBlock - i_startBlock, j_endBlock - j_startBlock);
                                                                                                                ^
In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note: candidate: constexpr gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>&)
   class SymmetricBlockMatrixBlockExpr : public Eigen::EigenBase<SymmetricBlockMatrixBlockExpr<SymmetricBlockMatrixType> >
         ^
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note:   candidate expects 1 argument, 5 provided
In file included from /usr/local/include/gtsam/linear/HessianFactor.h:23:0,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘gtsam::SymmetricBlockMatrix::Block gtsam::SymmetricBlockMatrix::full()’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:161:42: error: no matching function for call to ‘gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(gtsam::SymmetricBlockMatrix&, int, gtsam::DenseIndex, int)’
       return Block(*this, 0, nBlocks(), 0);
                                          ^
In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note: candidate: constexpr gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>&)
   class SymmetricBlockMatrixBlockExpr : public Eigen::EigenBase<SymmetricBlockMatrixBlockExpr<SymmetricBlockMatrixType> >
         ^
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note:   candidate expects 1 argument, 4 provided
In file included from /usr/local/include/gtsam/linear/HessianFactor.h:23:0,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘gtsam::SymmetricBlockMatrix::constBlock gtsam::SymmetricBlockMatrix::full() const’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:167:47: error: no matching function for call to ‘gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrix&, int, gtsam::DenseIndex, int)’
       return constBlock(*this, 0, nBlocks(), 0);
                                               ^
In file included from /usr/local/include/gtsam/base/SymmetricBlockMatrix.h:22:0,
                 from /usr/local/include/gtsam/linear/HessianFactor.h:23,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note: candidate: constexpr gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>::SymmetricBlockMatrixBlockExpr(const gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>&)
   class SymmetricBlockMatrixBlockExpr : public Eigen::EigenBase<SymmetricBlockMatrixBlockExpr<SymmetricBlockMatrixType> >
         ^
/usr/local/include/gtsam/base/SymmetricBlockMatrixBlockExpr.h:46:9: note:   candidate expects 1 argument, 4 provided
In file included from /usr/local/include/gtsam/linear/HessianFactor.h:23:0,
                 from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘Eigen::SelfAdjointView<const Eigen::Matrix<double, -1, -1>, 2u> gtsam::SymmetricBlockMatrix::matrix() const’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:173:14: error: could not convert ‘((const gtsam::SymmetricBlockMatrix*)this)->gtsam::SymmetricBlockMatrix::matrix_’ from ‘const Matrix {aka const Eigen::Matrix<double, -1, -1>}’ to ‘Eigen::SelfAdjointView<const Eigen::Matrix<double, -1, -1>, 2u>’
       return matrix_;
              ^
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h: In member function ‘Eigen::SelfAdjointView<Eigen::Matrix<double, -1, -1>, 2u> gtsam::SymmetricBlockMatrix::matrix()’:
/usr/local/include/gtsam/base/SymmetricBlockMatrix.h:179:14: error: could not convert ‘((gtsam::SymmetricBlockMatrix*)this)->gtsam::SymmetricBlockMatrix::matrix_’ from ‘gtsam::Matrix {aka Eigen::Matrix<double, -1, -1>}’ to ‘Eigen::SelfAdjointView<Eigen::Matrix<double, -1, -1>, 2u>’
       return matrix_;
              ^
In file included from /usr/local/include/gtsam/linear/GaussianFactorGraph.h:28:0,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizerParams.h:24,
                 from /usr/local/include/gtsam/nonlinear/NonlinearOptimizer.h:22,
                 from /usr/local/include/gtsam/nonlinear/GaussNewtonOptimizer.h:21,
                 from /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp:46:
/usr/local/include/gtsam/linear/HessianFactor.h: In member function ‘double gtsam::HessianFactor::constantTerm() const’:
/usr/local/include/gtsam/linear/HessianFactor.h:263:79: error: no match for call to ‘(gtsam::SymmetricBlockMatrix::constBlock {aka gtsam::SymmetricBlockMatrixBlockExpr<const gtsam::SymmetricBlockMatrix>}) (int, int)’
     double constantTerm() const { return info_(this->size(), this->size())(0,0); }
                                                                               ^
/usr/local/include/gtsam/linear/HessianFactor.h: In member function ‘double& gtsam::HessianFactor::constantTerm()’:
/usr/local/include/gtsam/linear/HessianFactor.h:268:74: error: no match for call to ‘(gtsam::SymmetricBlockMatrix::Block {aka gtsam::SymmetricBlockMatrixBlockExpr<gtsam::SymmetricBlockMatrix>}) (int, int)’
     double& constantTerm() { return info_(this->size(), this->size())(0,0); }
                                                                          ^
corelib/src/CMakeFiles/rtabmap_core.dir/build.make:1003: recipe for target 'corelib/src/CMakeFiles/rtabmap_core.dir/OptimizerGTSAM.cpp.o' failed
make[2]: *** [corelib/src/CMakeFiles/rtabmap_core.dir/OptimizerGTSAM.cpp.o] Error 1
CMakeFiles/Makefile2:267: recipe for target 'corelib/src/CMakeFiles/rtabmap_core.dir/all' failed
make[1]: *** [corelib/src/CMakeFiles/rtabmap_core.dir/all] Error 2
Makefile:149: recipe for target 'all' failed
make: *** [all] Error 2

I'm running Ubuntu 16.04 with OpenCV 3.1.0 built from source and ROS Kinetic Kame.
Reply | Threaded
Open this post in threaded view
|

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
Hi,

I just tried on Ubuntu 14.04 and had no problems with the latest GTSAM source code. Can you show the output of "make VERBOSE=TRUE" to see if the c++11 flag is set?

cheers

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
Looks like the c++11 flag was set (it is at the very far end of the last line).

ubuntu@nuc:~/rtabmap/build$ make VERBOSE=TRUE
/usr/bin/cmake -H/home/ubuntu/rtabmap -B/home/ubuntu/rtabmap/build --check-build-system CMakeFiles/Makefile.cmake 0
/usr/bin/cmake -E cmake_progress_start /home/ubuntu/rtabmap/build/CMakeFiles /home/ubuntu/rtabmap/build/CMakeFiles/progress.marks
make -f CMakeFiles/Makefile2 all
make[1]: Entering directory '/home/ubuntu/rtabmap/build'
make -f utilite/src/CMakeFiles/rtabmap_utilite.dir/build.make utilite/src/CMakeFiles/rtabmap_utilite.dir/depend
make[2]: Entering directory '/home/ubuntu/rtabmap/build'
cd /home/ubuntu/rtabmap/build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/ubuntu/rtabmap /home/ubuntu/rtabmap/utilite/src /home/ubuntu/rtabmap/build /home/ubuntu/rtabmap/build/utilite/src /home/ubuntu/rtabmap/build/utilite/src/CMakeFiles/rtabmap_utilite.dir/DependInfo.cmake --color=
make[2]: Leaving directory '/home/ubuntu/rtabmap/build'
make -f utilite/src/CMakeFiles/rtabmap_utilite.dir/build.make utilite/src/CMakeFiles/rtabmap_utilite.dir/build
make[2]: Entering directory '/home/ubuntu/rtabmap/build'
make[2]: Nothing to be done for 'utilite/src/CMakeFiles/rtabmap_utilite.dir/build'.
make[2]: Leaving directory '/home/ubuntu/rtabmap/build'
[  6%] Built target rtabmap_utilite
make -f utilite/resource_generator/CMakeFiles/res_tool.dir/build.make utilite/resource_generator/CMakeFiles/res_tool.dir/depend
make[2]: Entering directory '/home/ubuntu/rtabmap/build'
cd /home/ubuntu/rtabmap/build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/ubuntu/rtabmap /home/ubuntu/rtabmap/utilite/resource_generator /home/ubuntu/rtabmap/build /home/ubuntu/rtabmap/build/utilite/resource_generator /home/ubuntu/rtabmap/build/utilite/resource_generator/CMakeFiles/res_tool.dir/DependInfo.cmake --color=
make[2]: Leaving directory '/home/ubuntu/rtabmap/build'
make -f utilite/resource_generator/CMakeFiles/res_tool.dir/build.make utilite/resource_generator/CMakeFiles/res_tool.dir/build
make[2]: Entering directory '/home/ubuntu/rtabmap/build'
make[2]: Nothing to be done for 'utilite/resource_generator/CMakeFiles/res_tool.dir/build'.
make[2]: Leaving directory '/home/ubuntu/rtabmap/build'
[  7%] Built target res_tool
make -f corelib/src/CMakeFiles/rtabmap_core.dir/build.make corelib/src/CMakeFiles/rtabmap_core.dir/depend
make[2]: Entering directory '/home/ubuntu/rtabmap/build'
cd /home/ubuntu/rtabmap/build && /usr/bin/cmake -E cmake_depends "Unix Makefiles" /home/ubuntu/rtabmap /home/ubuntu/rtabmap/corelib/src /home/ubuntu/rtabmap/build /home/ubuntu/rtabmap/build/corelib/src /home/ubuntu/rtabmap/build/corelib/src/CMakeFiles/rtabmap_core.dir/DependInfo.cmake --color=
make[2]: Leaving directory '/home/ubuntu/rtabmap/build'
make -f corelib/src/CMakeFiles/rtabmap_core.dir/build.make corelib/src/CMakeFiles/rtabmap_core.dir/build
make[2]: Entering directory '/home/ubuntu/rtabmap/build'
[  8%] Building CXX object corelib/src/CMakeFiles/rtabmap_core.dir/OptimizerGTSAM.cpp.o
cd /home/ubuntu/rtabmap/build/corelib/src && /usr/bin/c++   -DDISABLE_PCAP -DDISABLE_PNG -DEIGEN_USE_NEW_STDVECTOR -DEIGEN_YES_I_KNOW_SPARSE_MODULE_IS_NOT_STABLE_YET -DFLANN_STATIC -DQT_CORE_LIB -DQT_GUI_LIB -DQT_NO_DEBUG -DQT_WIDGETS_LIB -Dqh_QHpointer -Drtabmap_core_EXPORTS -DvtkFiltersFlowPaths_AUTOINIT="1(vtkFiltersParallelFlowPaths)" -DvtkIOExodus_AUTOINIT="1(vtkIOParallelExodus)" -DvtkIOGeometry_AUTOINIT="1(vtkIOMPIParallel)" -DvtkIOImage_AUTOINIT="1(vtkIOMPIImage)" -DvtkIOSQL_AUTOINIT="2(vtkIOMySQL,vtkIOPostgreSQL)" -DvtkRenderingContext2D_AUTOINIT="1(vtkRenderingContextOpenGL)" -DvtkRenderingCore_AUTOINIT="4(vtkInteractionStyle,vtkRenderingFreeType,vtkRenderingFreeTypeOpenGL,vtkRenderingOpenGL)" -DvtkRenderingFreeType_AUTOINIT="2(vtkRenderingFreeTypeFontConfig,vtkRenderingMatplotlib)" -DvtkRenderingLIC_AUTOINIT="1(vtkRenderingParallelLIC)" -DvtkRenderingVolume_AUTOINIT="1(vtkRenderingVolumeOpenGL)" -I/usr/include/vtk-6.2 -I/usr/include/freetype2 -I/usr/include/x86_64-linux-gnu/freetype2 -I/usr/include/jsoncpp -I/usr/include/hdf5/openmpi -I/usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent -I/usr/lib/openmpi/include/openmpi/opal/mca/event/libevent2021/libevent/include -I/usr/lib/openmpi/include -I/usr/lib/openmpi/include/openmpi -I/usr/include/x86_64-linux-gnu -I/usr/include/tcl -I/usr/include/libxml2 -I/usr/include/python2.7 -I/home/ubuntu/rtabmap/utilite/include -I/home/ubuntu/rtabmap/corelib/src/../include -I/home/ubuntu/rtabmap/corelib/src -I/home/ubuntu/rtabmap/build/corelib/src -isystem /opt/ros/kinetic/include/opencv-3.1.0-dev/opencv -isystem /opt/ros/kinetic/include/opencv-3.1.0-dev -I/usr/include/pcl-1.7 -I/usr/include/eigen3 -I/usr/include/ni -I/usr/include/openni2 -I/usr/local/include -I/usr/include/dc1394 -I/usr/include/suitesparse -isystem /usr/include/x86_64-linux-gnu/qt5 -isystem /usr/include/x86_64-linux-gnu/qt5/QtWidgets -isystem /usr/include/x86_64-linux-gnu/qt5/QtGui -isystem /usr/include/x86_64-linux-gnu/qt5/QtCore -isystem /usr/lib/x86_64-linux-gnu/qt5/mkspecs/linux-g++-64  -fmessage-length=0  -fopenmp -std=c++11 -O3 -DNDEBUG -fPIC   -Wall -Wno-unknown-pragmas -fPIC -o CMakeFiles/rtabmap_core.dir/OptimizerGTSAM.cpp.o -c /home/ubuntu/rtabmap/corelib/src/OptimizerGTSAM.cpp

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
Hi,

I reproduced the error on Ubuntu 16.04. It seems to be related to the Eigen version installed (3.3 in 16.04). GTSAM builds with its own Eigen from its repository, which is why it has no compilation errors. I re-ordered the include directories to use Eigen from GTSAM before the system one (see this commit). I also fixed "GTSAM_INCLUDE_DIRS", which should be "GTSAM_INCLUDE_DIR".

cheers


Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
I just want to report back that I was able to compile rtabmap completely using the edits from the commit.

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
I'm currently in the process of setting up my tf tree and I just want to clarify the coordinate frames for the maps.

The SDK for my robot (UAV) internally uses a left-handed frame for the ground frame, where x --> North, y --> East, and z --> Up (a right-handed frame would have z --> Down). Thus I had to mess with the SDK so that all published topics are right-handed (ROS tf doesn't deal with left-handed frames). With the edits, increasing height makes the z value negative and thus the frame is right-handed. Thus I need a tf broadcaster to transform the ground frame to the map frame. What is the typical map frame axis setup? x --> East, y --> North, z --> Up?

Also, since the robot is a UAV, a quadrotor specifically, it can move equally in both the x and y axes (holonomic_robot). I'm guessing I should be using dwa_local_planner instead of base_local_planner, correct?

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
Hi,

See REP 103:
For short-range Cartesian representations of geographic locations, use the east north up [5] (ENU) convention:

X east
Y north
Z up

I don't know if there is a holonomic parameter for dwa_local_planner, but there is one for base_local_planner called "holonomic_robot".

cheers

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
I was reviewing the SDK for my drone and the system is mostly set up with the NED convention (part of the SDK has a left-handed NED with the z-axis pointing up, which I reversed for the published topics, but the API calls are still left handed...). Also, the body frame is that of a typical plane (principal frame?) with

Principal? frame
X - forward
Y - right
Z - down

NED
X - north
Y - east
Z - down

/odom_ned --> /base_link_principle --> /camera_optical
(this is my current setup and everything is displayed upside down in rviz. I do have a /map --> /odom_ned transform that does a 180 degree pitch and 90 degree yaw rotation to make it upright in ENU)

Most examples of ROS robots I see use the ENU convention and the body frame is as follows

Body frame
X - forward
Y - left
Z - up

ENU
X - east
Y - north
Z - up
/odom --> /base_link --> /camera_optical

Will the navigation and rtabmap packages work with the NED coordinate frame and the principal body frame? Or should I convert the NED and plane body frame (principal?) into ENU and the typical ROS body frame?


Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

matlabbe
Administrator
rtabmap and move_base assume:
Body frame
X - forward
Y - left
Z - up

The world frame is up to your application, though ROS uses ENU.


cheers

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
Alright, looks like I'll first have to modify the topics being published to generate frames in ENU and the regular ROS body frame before I can implement rtabmap and the navigation stack.

For the body frame I need to invert the Y and Z axes to convert the plane frame to the ROS body frame
X - forward                   |  X ->  X
Y - right --(invert)--> left  |  Y -> -Y
Z - down  --(invert)--> up    |  Z -> -Z
What about the rotation axis roll, pitch, and yaw / x, y, z, w?
X ->  X + ????
Y -> -Y + ????
Z -> -Z + ????
W ->  W + ????
// How do you add 90 degree yaw to a quaternion? Multiply a quaternion with just 90 degree yaw?
Roll  ->  Roll  ?
Pitch -> -Pitch ?
Yaw   -> -Yaw + 90 degree  ?

As for the ground/world frame, I need to swap X and Y and invert Z axis to convert NED to ENU
X - north --(swap with Y)--> east   |  X ->  Y
Y - east  --(swap with X)--> north  |  Y ->  X
Z - down  --(  invert   )--> up     |  Z -> -Z
X ->  Y  ?
Y ->  X  ?
Z -> -Z  ?
W ->  W  ?
Roll  ->  Pitch ?
Pitch ->  Roll  ?
Yaw   -> -Yaw   ?

Also, for the ENU world coordinate frame, where is 0 degree yaw? Pointed east? If 0 degree yaw is pointed east, then I need to modify my NED to ENU conversion for rotation, correct? And should the body frame match the world frame at zero rotation and translation?

X ->  Y + ????  ?
Y ->  X + ???? ?
Z -> -Z + ???? ?
W ->  W + ???? ?
// How do you add 90 degree yaw to a quaternion?
Roll  ->  Pitch ?
Pitch ->  Roll  ?
Yaw   -> -Yaw + 90 degree  ?

Edit: I finished converting all the topics generated by the SDK from the NED coordinate frame into the ENU coordinate frame. Note ENU has yaw 0 at east and NED has yaw 0 at north.

Here are the conversions from right-handed NED to ENU

Body frame:
X - forward                   |  X ->  X
Y - right --(invert)--> left  |  Y -> -Y
Z - down  --(invert)--> up    |  Z -> -Z
X ->  X 
Y -> -Y 
Z -> -Z 
W ->  W

World frame: (note you have to rotate by 90 degrees in yaw so 0 yaw is at east and not north)
X - north --(swap with Y)--> east   |  X ->  Y
Y - east  --(swap with X)--> north  |  Y ->  X
Z - down  --(  invert   )--> up     |  Z -> -Z
             (90 degree yaw turn quat)
X ->  Y      0.0
Y ->  X  \/  0.0
Z -> -Z  /\  0.7071067811865476
W ->  W      0.7071067811865476
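
As a sanity check, here is a minimal sketch of the position and yaw part of this NED to ENU conversion (roll and pitch would follow the quaternion component swap above):

// Sketch: NED -> ENU for position and yaw only.
// NED yaw is measured from north, clockwise (z down); ENU yaw is measured
// from east, counter-clockwise (z up), so yaw_enu = pi/2 - yaw_ned.
#include <cmath>
#include <tf/transform_datatypes.h>
#include <geometry_msgs/Pose.h>

geometry_msgs::Pose nedToEnu(double xNed, double yNed, double zNed, double yawNed)
{
	geometry_msgs::Pose enu;
	enu.position.x = yNed;   // east  <- NED y
	enu.position.y = xNed;   // north <- NED x
	enu.position.z = -zNed;  // up    <- -(NED z)
	enu.orientation = tf::createQuaternionMsgFromYaw(M_PI_2 - yawNed);
	return enu;
}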

Now that I've got the coordinate frames all worked out, it's time to integrate the system into rtabmap and the navigation stack.

Isn't odometry supposed to be in world coordinates? That is, x, y, z represent position in the world with x-east, y-north, and z-up. How about orientation? My odometry is in body coordinates (i.e. going forward is always positive pitch no matter what direction I'm heading). Anyways, the odometry must be correct since the tf of my UAV maps out correctly in reference to the world frame.

Re: Outdoor 3D mapping with GPS and 4 sets of stereo cameras.

fairuse
In reply to this post by fairuse
I was reading through the ROS REP documentation on standardization for various sensor types.

I stumbled upon REP 118, which states depth images should follow the Canonical Representation: 32-bit float depth values in meters along the z-axis.

The thing is that my depth sensor, along with many other depth sensors, outputs a raw 16-bit integer (I'm not sure if my sensor's int is signed or unsigned). I first need to investigate and see exactly what information my sensor is putting out.

Since the cv::Mat depth is in CV_16SC1 format, I'm using (int)depth.at<signed short>(x, y) to output the values. According to the SDK, the depth image has 4 fractional bits just like cv::StereoBM disparity maps, so I need to divide the value by 16 to convert the values into distance units.
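
As a sketch, the raw values can then be converted in one step (unitToMeters is a placeholder for whatever distance unit the sensor actually reports, 1.0 if it is already meters):

// Sketch: convert a raw CV_16SC1 depth image (4 fractional bits) to CV_32FC1.
#include <opencv2/core.hpp>

cv::Mat rawDepthToFloat(const cv::Mat & depthRaw16S, double unitToMeters)
{
	cv::Mat depth32f;
	depthRaw16S.convertTo(depth32f, CV_32FC1, (1.0 / 16.0) * unitToMeters);
	return depth32f;
}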

Anyways, I just want to confirm that rtabmap follows the Canonical Representation for depth images, i.e. 32-bit float images, correct?