Mapping from two PointCloud sources + One RGB camera
Posted by srijal97 at http://official-rtab-map-forum.206.s1.nabble.com/Mapping-from-two-PointCloud-sources-One-RGB-camera-tp9622.html
Hello!
I have a drone with two ToF depth sensors onboard, each publishing both a pointcloud and a depth image. There is also an RGB camera onboard, and I have camera_info topics for all of these sources.
First, I have combined the two pointcloud sources using:
<!-- Node which combines up to four pointcloud sources into a single stream -->
<node name="cloud_aggregator" pkg="rtabmap_util" type="rtabmap_point_cloud_aggregator">
  <remap from="cloud1" to="/tof_left_pc_fixed"/>
  <remap from="cloud2" to="/tof_right_pc_fixed"/>
  <remap from="combined_cloud" to="/tof_combined_pc"/>
  <param name="count" type="int" value="2"/>
  <param name="frame_id" type="string" value="hires"/>
  <param name="fixed_frame_id" type="string" value="odom"/>
  <param name="approx_sync" type="bool" value="true"/>
</node>
Note that hires is the RGB camera's frame ID, and the ToF pointcloud topics /tof_left_pc_fixed and /tof_right_pc_fixed are in the left_tof and right_tof frames respectively. I figured that outputting the combined pointcloud in hires would make combining it with color information from the RGB camera easier.
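As a sanity check on my frame setup, my understanding is that the aggregation step amounts to transforming each cloud into the target frame and concatenating. A rough sketch of that idea with made-up extrinsics (the transforms below are placeholders, not the drone's real calibration):

```python
# Conceptual sketch of pointcloud aggregation: bring each cloud into a
# common target frame with a 4x4 homogeneous transform, then concatenate.
# The extrinsics here are made-up placeholders, not real calibration.
import numpy as np

def transform_cloud(points, T):
    """Apply a 4x4 homogeneous transform T to an (N, 3) array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

# Placeholder extrinsics: left/right ToF frames relative to the hires frame.
T_hires_left = np.eye(4);  T_hires_left[0, 3] = -0.05   # 5 cm to the left
T_hires_right = np.eye(4); T_hires_right[0, 3] = 0.05   # 5 cm to the right

left_cloud = np.array([[1.0, 0.0, 2.0]])
right_cloud = np.array([[-1.0, 0.0, 2.0]])

combined = np.vstack([
    transform_cloud(left_cloud, T_hires_left),
    transform_cloud(right_cloud, T_hires_right),
])
print(combined.shape)  # (2, 3)
```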
I then passed this point cloud to RTAB-Map with the following arguments:
RTAB-Map creates an uncolored map, and I would now like to include color information from the RGB camera. The combined pointcloud source has a broader field of view than the RGB camera, so I am unsure how to approach this problem.
Would I have to "crop" out the pointcloud somehow to make it see the same pixels as the RGB camera? Is there a way to partially color the map where RGB and pointcloud pixels overlap?
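From what I can tell, the overlap test itself is just a pinhole projection: a point gets color only if it projects inside the RGB image. A minimal sketch of that idea, with placeholder intrinsics and image size (the real fx/fy/cx/cy would come from the K matrix in /hires/camera_info), assuming the points are already in the camera frame:

```python
# Sketch of "partial coloring": project each 3D point (already in the
# camera frame) through a pinhole model; points that land inside the image
# get the pixel's color, points outside the RGB FOV stay uncolored.
import numpy as np

fx, fy, cx, cy = 500.0, 500.0, 320.0, 240.0   # placeholder intrinsics
width, height = 640, 480                      # placeholder image size

def project_and_color(points, image):
    """Return per-point RGB where the point projects inside the image, else None."""
    colors = []
    for x, y, z in points:
        if z <= 0:                            # behind the camera plane
            colors.append(None)
            continue
        u = int(fx * x / z + cx)              # pinhole projection
        v = int(fy * y / z + cy)
        if 0 <= u < width and 0 <= v < height:
            colors.append(tuple(int(c) for c in image[v, u]))  # in FOV
        else:
            colors.append(None)               # outside FOV: leave uncolored
    return colors

image = np.full((height, width, 3), 128, dtype=np.uint8)  # dummy gray image
points = [(0.0, 0.0, 2.0),   # projects to the image center -> colored
          (5.0, 0.0, 2.0)]   # far off to the side -> outside FOV -> None
print(project_and_color(points, image))   # [(128, 128, 128), None]
```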
I have also tried using rtabmap_util/pointcloud_to_depthimage to convert my combined pointcloud to a depth image, but it does not work and throws an error instead. Here are the launch arguments:
<!-- Node which reprojects a point cloud into a camera frame to create a depth image -->
<node name="cloud_to_depth" pkg="rtabmap_util" type="pointcloud_to_depthimage">
  <remap from="cloud" to="/tof_combined_pc"/>
  <remap from="camera_info" to="/hires/camera_info"/>
  <remap from="image" to="/tof_combined_pc/image"/>
  <remap from="image_raw" to="/tof_combined_depth/image_raw"/>
  <param name="decimation" type="int" value="1"/>
  <param name="fill_holes_size" type="int" value="0"/>
  <param name="fill_holes_error" type="double" value="0.1"/>
  <param name="fill_iterations" type="int" value="1"/>
  <param name="fixed_frame_id" type="string" value="odom"/>
  <param name="approx" type="bool" value="true"/>
  <param name="wait_for_transform" type="double" value="0.2"/>
</node>
Even though fixed_frame_id is set to odom, I still get this error (it should be looking up the transform from hires to odom, but the error shows hires to an empty frame):
[ WARN] - /cloud_to_depth: [1692077220.677414292, 1691442532.882630568] Could not get transform from hires to after 0.200000 seconds (for stamp=1691442532.544901)! Error=". canTransform returned after 0.201555 timeout was 0.2.".
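My guess is that the blank frame name after "hires to" means one of the incoming messages (most likely the combined cloud, or possibly the camera_info) carries an empty header.frame_id, so tf has no frame name to look up. A tiny standalone check that makes this failure mode obvious, using a plain dict to stand in for a message header (check_frame_id is a hypothetical helper, not a ROS API):

```python
# Hypothetical sanity check: a tf lookup to an unnamed frame can never
# succeed, so validate header.frame_id before attempting the transform.
def check_frame_id(header):
    frame = header.get("frame_id", "")
    if not frame:
        raise ValueError(
            "message has an empty header.frame_id; "
            "tf cannot look up a transform to an unnamed frame")
    return frame

good = {"frame_id": "hires", "stamp": 1691442532.544901}
bad = {"frame_id": "", "stamp": 1691442532.544901}

print(check_frame_id(good))        # hires
try:
    check_frame_id(bad)            # mirrors the "hires to " warning above
except ValueError as e:
    print("error:", e)
```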
I am not very experienced with image processing, so perhaps it does not make sense to convert this combined pointcloud to a depth image because its field of view is too wide? Given my sensors, I would love some suggestions on how the RTAB-Map pipeline should be structured. If it helps, here is a YouTube video showing the pointcloud map created using RTAB-Map and a view of my RGB camera: