Hello,
I'm trying to find documentation on what rtabmap actually uses under the hood for its stereo camera depth calculations. There are two configurations I could find for stereo cameras (http://wiki.ros.org/rtabmap_ros/Tutorials/SetupOnYourRobot):

* Something (not StereoBM) happens inside RTABMAP directly, and RTABMAP does the dense stereo calculation only periodically (default 1 Hz) for mapping. RTABMAP consumes the two rectified images, their intrinsics, and relative odometry. Called "Stereo A".
* Stereo processing happens outside of RTABMAP, and RTABMAP consumes RGB-D data (a depth image, a rectified image, and camera intrinsics) plus relative odometry. Called "Stereo B".
What I'd like to know is which algorithm RTABMAP uses in the "Stereo A" configuration to extract 3D data for whatever feature detection algorithm is selected. Is it using some optical flow algorithm? Some other very fast but sparse stereo algorithm?
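For concreteness, here is the sort of sparse pipeline I'm imagining "Stereo A" might be doing (just a sketch I put together with OpenCV; the GFTT detector, LK window size, and thresholds are my own guesses, not anything taken from the rtabmap source):

```cpp
#include <cmath>
#include <vector>
#include <opencv2/imgproc.hpp>
#include <opencv2/video/tracking.hpp>

// Guess at a sparse stereo pipeline: detect corners in the left rectified
// image, "track" them into the right image with pyramidal LK, and
// triangulate the surviving matches. Inputs assumed 8-bit grayscale.
std::vector<cv::Point3f> sparseStereo3D(const cv::Mat & leftRect,
                                        const cv::Mat & rightRect,
                                        float fx, float cx, float cy,
                                        float baseline)
{
    // 1. Sparse corner detection in the left image only.
    std::vector<cv::Point2f> leftPts;
    cv::goodFeaturesToTrack(leftRect, leftPts, 500 /*maxCorners*/, 0.01, 5);

    // 2. Find each corner in the right image via pyramidal LK optical flow.
    //    On a rectified pair the match should stay on the same row, so the
    //    horizontal shift is the disparity.
    std::vector<cv::Point2f> rightPts;
    std::vector<unsigned char> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(leftRect, rightRect, leftPts, rightPts,
                             status, err, cv::Size(21, 21), 3);

    // 3. Keep matches that tracked, have positive disparity, and respect the
    //    epipolar constraint, then triangulate: Z = fx * B / d.
    std::vector<cv::Point3f> points3d;
    for(size_t i = 0; i < leftPts.size(); ++i)
    {
        float d = leftPts[i].x - rightPts[i].x;
        if(status[i] && d > 0.5f && std::fabs(leftPts[i].y - rightPts[i].y) < 2.0f)
        {
            float z = fx * baseline / d;
            points3d.push_back(cv::Point3f((leftPts[i].x - cx) * z / fx,
                                           (leftPts[i].y - cy) * z / fx, // assumes fx ~= fy
                                           z));
        }
    }
    return points3d;
}
```

Even if that guess is wrong in the details, it illustrates the per-feature (rather than per-pixel) cost I'm trying to reason about.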
I've been digging through the documentation but cannot find anything on this. I'm trying to weigh the strengths and weaknesses of running the RTABMAP feature detection directly on dense stereo output (high CPU usage) versus on whatever it is doing under the hood (low CPU usage).
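For reference, this is the kind of dense path I mean for "Stereo B", where block matching runs outside rtabmap and the result is published as a depth image (again just a sketch; the numDisparities/blockSize values are arbitrary examples):

```cpp
#include <opencv2/calib3d.hpp>

// Dense stereo with OpenCV block matching, converted to a metric depth map.
// Inputs assumed 8-bit grayscale rectified images.
cv::Mat denseDepth(const cv::Mat & leftRect, const cv::Mat & rightRect,
                   float fx, float baseline)
{
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(128 /*numDisparities*/,
                                                    21  /*blockSize*/);
    cv::Mat disparity16; // CV_16S, fixed-point disparity scaled by 16
    bm->compute(leftRect, rightRect, disparity16);

    cv::Mat depth(disparity16.size(), CV_32F, cv::Scalar(0.0f));
    for(int y = 0; y < disparity16.rows; ++y)
    {
        for(int x = 0; x < disparity16.cols; ++x)
        {
            float d = disparity16.at<short>(y, x) / 16.0f;
            if(d > 0.5f)
            {
                depth.at<float>(y, x) = fx * baseline / d; // Z = fx * B / d
            }
        }
    }
    return depth;
}
```

That per-pixel matching over the whole image is where the high CPU usage comes from, which is why I'd like to know what the internal sparse path is actually doing.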