I am really unsure about the best way to proceed with our two-camera setup. We are using ROS for this.
Currently, I am flying a drone that has one D435i camera facing forward, feeding into an RTAB-Map instance running stereo SLAM, and another D435i facing backward, feeding into a different RTAB-Map instance running RGB-D SLAM. Right now, we are using the front-facing camera for its odometry and the back-facing camera to perform segmentation. Our TF tree looks something like this:
The UKF filter currently only takes in the flight controller's IMU and the front camera's odometry to estimate the odom->base_link transform.
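For illustration, a robot_localization UKF fusing just those two inputs might be configured roughly like this (topic names and the exact sensor config vectors are placeholders, not our actual file):

```yaml
# Sketch of a robot_localization ukf_localization_node config fusing only
# the FCU IMU and the front camera's visual odometry. Topic names assumed.
frequency: 30
world_frame: odom            # so the node publishes odom -> base_link
odom_frame: odom
base_link_frame: base_link

odom0: /front/rtabmap/odom   # front camera stereo odometry (example topic)
odom0_config: [true,  true,  true,    # x, y, z
               false, false, true,    # roll, pitch, yaw
               false, false, false,   # vx, vy, vz
               false, false, false,   # vroll, vpitch, vyaw
               false, false, false]   # ax, ay, az

imu0: /mavros/imu/data       # flight controller IMU (example topic)
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              true,  true,  true]
```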
Right now, if I want to use both point clouds in the map frame, I just take both cloud_map outputs, add them together, and display the result. However, the back camera's cloud comes out much more muddled than the front camera's. I suspect this has something to do with the fact that only the front camera's odometry is being used to estimate the current pose.
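Concretely, the naive combination is just a concatenate-and-downsample. A minimal pure-Python sketch of the idea (the real clouds are sensor_msgs/PointCloud2, both already expressed in the map frame):

```python
# Merge two point clouds by concatenating their points and keeping one
# point per voxel, so overlapping regions don't double up. Pure-Python
# stand-in for what would normally be done with PCL on PointCloud2 data.

def voxel_merge(cloud_a, cloud_b, voxel=0.05):
    """Merge two lists of (x, y, z) points, keeping one point per voxel."""
    voxels = {}
    for x, y, z in cloud_a + cloud_b:
        key = (round(x / voxel), round(y / voxel), round(z / voxel))
        voxels.setdefault(key, (x, y, z))  # first point wins per voxel
    return list(voxels.values())

front = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
back  = [(0.01, 0.0, 0.0), (2.0, 0.0, 0.0)]  # first point near-duplicates front's
merged = voxel_merge(front, back)
print(len(merged))  # -> 3: the two near-identical points collapse into one voxel
```

If the two instances disagree on the `map`->`odom` correction, no amount of downsampling fixes the misalignment, which is consistent with the muddling you see.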
I was wondering if there is a better way we should be combining the two RTAB-Map instances. I'm not attached to our current approach, so if anyone has a suggestion that involves completely redoing how it's set up, I'm all ears!
One idea could be to run stereo_odometry with the front camera data in the base_link frame, and feed it to a single rtabmap instance using the RGB-D data of the back camera. Do you really need a stereo rtabmap instance?
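Something like this (a sketch only; the camera topic names are examples from a typical D435i setup, adjust to yours):

```xml
<group ns="rtabmap">
  <!-- Visual odometry from the front stereo pair (IR emitter off) -->
  <node pkg="rtabmap_ros" type="stereo_odometry" name="stereo_odometry">
    <param name="frame_id" value="base_link"/>
    <remap from="left/image_rect"   to="/front/infra1/image_rect_raw"/>
    <remap from="right/image_rect"  to="/front/infra2/image_rect_raw"/>
    <remap from="left/camera_info"  to="/front/infra1/camera_info"/>
    <remap from="right/camera_info" to="/front/infra2/camera_info"/>
  </node>

  <!-- Single rtabmap instance: mapping with the BACK camera's RGB-D data -->
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
    <param name="frame_id" value="base_link"/>
    <param name="subscribe_depth" value="true"/>
    <remap from="rgb/image"       to="/back/color/image_raw"/>
    <remap from="depth/image"     to="/back/aligned_depth_to_color/image_raw"/>
    <remap from="rgb/camera_info" to="/back/color/camera_info"/>
  </node>
</group>
```

Since both nodes publish and consume poses in base_link, rtabmap can map with the back camera while being localized by the front one.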
Depending on the computation power available, you could even try running one stereo_odometry node on the two camera stereo streams, and one rtabmap instance subscribing to that odometry and to both RGB-D data streams from the front and back cameras. In that mode, the IR emitter would need to be disabled on both cameras.
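A rough sketch of the single-instance, two-camera RGB-D subscription in ROS1 rtabmap_ros (topic names are examples): each camera gets an rgbd_sync nodelet producing an rgbd_image topic, and rtabmap subscribes to both.

```xml
<!-- Synchronize RGB + depth per camera into a single rgbd_image message -->
<node pkg="nodelet" type="nodelet" name="rgbd_sync_front"
      args="standalone rtabmap_ros/rgbd_sync">
  <remap from="rgb/image"       to="/front/color/image_raw"/>
  <remap from="depth/image"     to="/front/aligned_depth_to_color/image_raw"/>
  <remap from="rgb/camera_info" to="/front/color/camera_info"/>
  <remap from="rgbd_image"      to="/front/rgbd_image"/>
</node>
<node pkg="nodelet" type="nodelet" name="rgbd_sync_back"
      args="standalone rtabmap_ros/rgbd_sync">
  <remap from="rgb/image"       to="/back/color/image_raw"/>
  <remap from="depth/image"     to="/back/aligned_depth_to_color/image_raw"/>
  <remap from="rgb/camera_info" to="/back/color/camera_info"/>
  <remap from="rgbd_image"      to="/back/rgbd_image"/>
</node>

<!-- One rtabmap instance subscribing to both synchronized streams -->
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <param name="frame_id"       value="base_link"/>
  <param name="subscribe_rgbd" value="true"/>
  <param name="rgbd_cameras"   value="2"/>
  <remap from="rgbd_image0" to="/front/rgbd_image"/>
  <remap from="rgbd_image1" to="/back/rgbd_image"/>
</node>
```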
The problem with having two rtabmap instances using the same TF tree at the same time is that `map`->`odom` will likely be different between the back and front instances, and two `map` frames would be published at the same time for the same child frame (if one or the other is not disabled), which will create errors in TF.
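If you do keep two instances anyway, at minimum only one of them should publish the `map`->`odom` transform, for example (parameter names from memory, double-check against the wiki):

```xml
<!-- On the secondary rtabmap instance: don't publish map->odom to TF -->
<param name="publish_tf" value="false"/>
<!-- Optionally give it a distinct map frame so the two maps don't collide -->
<param name="map_frame_id" value="map_back"/>
</node>
```

The secondary instance can still be queried for its map data; it just no longer fights the primary one over the TF tree.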