I am currently working on a system with two RealSense D435i cameras facing in opposite directions. They are connected by a flexible joint with one (limited) rotary degree of freedom, so the FOVs never overlap. I can estimate the relative orientation of the two cameras by comparing their IMU data.
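To make the IMU comparison concrete, here is a minimal sketch of the idea, assuming each IMU reports its orientation as a world-frame quaternion (w, x, y, z): the rotation from camera 1's frame to camera 2's frame is then conj(q1) ⊗ q2. The function names are illustrative, not from any particular library.

```python
import numpy as np

def quat_conj(q):
    """Conjugate of a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return np.array([w, -x, -y, -z])

def quat_mul(a, b):
    """Hamilton product a ⊗ b of quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def relative_rotation(q_cam1, q_cam2):
    """Rotation from camera 1's frame to camera 2's frame,
    given each camera's world-frame orientation from its IMU."""
    return quat_mul(quat_conj(q_cam1), q_cam2)

# Example: camera 1 at identity, camera 2 rotated 180 deg about z
q1 = np.array([1.0, 0.0, 0.0, 0.0])
q2 = np.array([0.0, 0.0, 0.0, 1.0])
q_rel = relative_rotation(q1, q2)
```

Note that this only recovers the relative rotation; the translation between the cameras still has to come from the known geometry of the joint, and IMU orientation estimates drift around the gravity axis, which is part of the accuracy concern below.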
However, this means that the link between the cameras is not fixed throughout the mapping process. Does this have a negative effect on the loop closure or any other part of the algorithm?
I appreciate any advice you can provide.
If you update the TF between the cameras and the base frame, rtabmap can know where the cameras are relative to the base frame at a particular timestamp, so visual odometry and loop closure detection can still work. My bigger doubt is how accurately you can know the relative position between the cameras.
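As a rough illustration of what that updated TF encodes, the pose of the second camera in the base frame can be composed from a static base-to-joint offset, the current joint angle (estimated from the IMUs), and a static joint-to-camera offset: T_base_cam2(t) = T_base_joint · Rz(θ(t)) · T_joint_cam2. A minimal numpy sketch, with hypothetical offsets and a joint axis assumed along z:

```python
import numpy as np

def rot_z(theta):
    """4x4 homogeneous rotation about the joint's z axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def translation(x, y, z):
    """4x4 homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

# Static offsets (hypothetical values from the robot's geometry):
T_base_joint = translation(0.10, 0.0, 0.30)   # base frame -> joint axis
T_joint_cam2 = translation(0.05, 0.0, 0.0)    # joint axis -> camera 2

def base_to_cam2(theta):
    """Pose of camera 2 in the base frame for the current joint
    angle theta, e.g. estimated from the two IMUs. This is the
    transform you would (re)publish on TF at each timestamp."""
    return T_base_joint @ rot_z(theta) @ T_joint_cam2
```

In practice this would be broadcast with a tf2 broadcaster at each timestamp so that rtabmap always sees the current camera poses; the numbers and frame layout above are placeholders, not taken from the thread.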
Thank you for your reply.
As you mentioned, it is hard to know the exact relative position between the cameras. To avoid this issue, I was thinking of running two separate SLAM instances and merging the generated maps with the multi-session mapping feature. Since I am quite certain this will not work while the cameras are always pointing in nearly opposite directions, I am planning to turn the robot 360° regularly, which should provide an overlap in the viewing angles.
However, I suspect that, especially for longer mapping sessions, this won't deliver good results because different drifts accumulate in the separately generated maps. Please correct me if you think I am wrong, or let me know if you have any other advice on how I could solve this problem without knowing the relative position of the cameras with 100% certainty.