With the iOS version, we use ARKit's VIO approach, which is more robust than the VO approaches inside the RTAB-Map library. It is also similar in performance to Google Tango's VIO. Using rtabmap_ros, you may find an open-source VIO approach that could be fed to the rtabmap node to produce similar results. For iOS, it is just very convenient to already have state-of-the-art VIO with a synchronized LiDAR sensor, a powerful computer, a screen and a battery, all in a device you can hold with one hand.
With other cameras, there are ways to get close to ARKit, but careful configuration is needed. For example, with the D435i, use the IR+Depth mode with the IR emitter disabled (the emitter's dot pattern corrupts visual feature matching on the IR images). Note also that stereo cameras cannot produce point clouds as accurate as those from the ToF (LiDAR) camera on the iPhone.
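As a rough sketch of that D435i setup under ROS1 with realsense2_camera and rtabmap_ros (exact topic names and launch arguments may differ depending on your driver version, so treat this as a starting point rather than a verified recipe):

```shell
# Start the D435i with both IR (infra) streams enabled
roslaunch realsense2_camera rs_camera.launch \
    enable_infra1:=true enable_infra2:=true

# Disable the IR emitter so its dot pattern does not
# interfere with feature matching on the IR images
rosrun dynamic_reconfigure dynparam set /camera/stereo_module emitter_enabled 0

# Run RTAB-Map in stereo mode on the rectified IR images
roslaunch rtabmap_ros rtabmap.launch stereo:=true \
    left_image_topic:=/camera/infra1/image_rect_raw \
    right_image_topic:=/camera/infra2/image_rect_raw \
    left_camera_info_topic:=/camera/infra1/camera_info \
    right_camera_info_topic:=/camera/infra2/camera_info
```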
With cameras like the L515 or Azure Kinect (which have a ToF camera), you can do ICP odometry, which in some cases is quite accurate.
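For reference, a minimal ICP odometry sketch with rtabmap_ros (ROS1), assuming the camera driver is already publishing a point cloud topic; the topic name and the ICP parameter values here are illustrative and should be tuned for your sensor:

```shell
# Subscribe icp_odometry to the depth camera's point cloud;
# point-to-plane ICP with voxel filtering is a common starting point
rosrun rtabmap_ros icp_odometry \
    scan_cloud:=/camera/depth/color/points \
    _Icp/PointToPlane:=true \
    _Icp/VoxelSize:=0.05
```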
cheers,
Mathieu