I'm working on a research project where we will have a fleet of robots in a logistics environment.
The robots carry sensors that can compute ORB features on the fly from whatever they see.
I know that RTAB-Map can be built with ORB, and it occurred to me that we could drastically reduce network overhead by streaming only the features to RTAB-Map instead of sending the whole image/point cloud.
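To make the overhead argument concrete, here is a rough back-of-the-envelope comparison. The resolution, feature budget, and per-keypoint layout are assumptions for illustration, not measurements from our setup:

```python
# Back-of-the-envelope bandwidth comparison (illustrative numbers only):
# one raw VGA RGB frame vs. a typical set of ORB features for that frame.

WIDTH, HEIGHT, CHANNELS = 640, 480, 3        # assumed camera resolution
raw_frame_bytes = WIDTH * HEIGHT * CHANNELS  # uncompressed 8-bit RGB

N_FEATURES = 1000        # assumed ORB budget per frame
DESCRIPTOR_BYTES = 32    # ORB descriptors are 256-bit binary strings
KEYPOINT_BYTES = 7 * 4   # x, y, size, angle, response, octave, class_id
POINT3D_BYTES = 3 * 4    # optional 3D point (x, y, z) per feature

feature_bytes = N_FEATURES * (DESCRIPTOR_BYTES + KEYPOINT_BYTES + POINT3D_BYTES)

print(f"raw frame: {raw_frame_bytes / 1024:.0f} KiB")   # 900 KiB
print(f"features:  {feature_bytes / 1024:.0f} KiB")     # 70 KiB
print(f"reduction: ~{raw_frame_bytes / feature_bytes:.0f}x")
```

Even before compressing anything, streaming only features is roughly an order of magnitude less data per frame than the raw image.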
I tried digging into the code to find a way to make this work, but I couldn't find an obvious entry point.
Has somebody done this before?
Or does anybody have an idea where the feature points are passed to the mapping process?
You can publish an rgbd_image msg to the rtabmap node with only the key_points, points and descriptors fields set, without the images. However, the local occupancy grid won't be created if no images or scans are received.

If you need the occupancy grid, you could send a downsampled PointCloud2 to the rtabmap node (subscribe_scan_cloud:=true), or a decimated depth image inside the rgbd_image topic (while adjusting the rgb camera info to match the depth resolution).
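To illustrate which fields to fill, here is a sketch using plain Python stand-ins for the message types. The field names key_points, points and descriptors are the ones from the rgbd_image msg mentioned above; the inner layout of the keypoint type here is an assumption modeled on cv::KeyPoint, so check the actual rtabmap_msgs definitions before wiring up a real node:

```python
from dataclasses import dataclass, field
from typing import List

# Plain-Python stand-ins for the relevant message types. In a real node you
# would import the generated ROS messages instead; this only shows which
# fields to populate and which to leave empty.

@dataclass
class KeyPoint:              # layout modeled on cv::KeyPoint (assumption)
    x: float
    y: float
    size: float = 31.0
    angle: float = -1.0
    response: float = 0.0
    octave: int = 0
    class_id: int = -1

@dataclass
class Point3f:               # 3D position of the feature in the camera frame
    x: float
    y: float
    z: float

@dataclass
class RGBDImage:
    key_points: List[KeyPoint] = field(default_factory=list)
    points: List[Point3f] = field(default_factory=list)
    descriptors: bytes = b""  # concatenated ORB descriptors, 32 bytes each
    # rgb/depth image fields intentionally left unset

# One hypothetical ORB feature: 2D keypoint, 3D position, 32-byte descriptor.
msg = RGBDImage(
    key_points=[KeyPoint(x=320.5, y=240.5)],
    points=[Point3f(x=0.1, y=0.0, z=1.5)],
    descriptors=bytes(32),
)
assert len(msg.descriptors) == 32 * len(msg.key_points)
print(len(msg.key_points), len(msg.points), len(msg.descriptors))
```

The key point is that key_points, points and descriptors must stay index-aligned (feature i has its keypoint, 3D point, and descriptor at position i), while the image fields are simply left empty.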
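If you go the decimated-depth route for the occupancy grid, the camera intrinsics have to be scaled to the reduced resolution. A minimal sketch with NumPy, assuming a simple every-Nth-pixel decimation and a decimation factor of 4 (both are illustrative choices, not RTAB-Map's internal code):

```python
import numpy as np

def decimate_depth(depth, fx, fy, cx, cy, decimation=4):
    """Keep every Nth pixel of a depth image and scale the camera
    intrinsics to match the reduced resolution (illustrative sketch)."""
    small = depth[::decimation, ::decimation]
    s = 1.0 / decimation
    return small, fx * s, fy * s, cx * s, cy * s

# Fake 640x480 depth frame at a constant 1.5 m, with typical Kinect-like
# intrinsics (assumed values).
depth = np.full((480, 640), 1.5, dtype=np.float32)
small, fx, fy, cx, cy = decimate_depth(depth, fx=525.0, fy=525.0,
                                       cx=319.5, cy=239.5)
print(small.shape, fx, cx)   # (120, 160) 131.25 79.875
```

The scaled fx, fy, cx, cy are what you would put into the rgb camera info published alongside the decimated depth, so that the two stay consistent.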