Referring to the first post, similar to that setup or this one used at the IROS 2014 Kinect Challenge: if you already have good wheel odometry (for example, a fusion of IMU+encoders using robot_localization), you don't need a powerful computer (as low as a RPi) to get the same results. The RAM size and CPU power would then dictate how long you can SLAM online (or the maximum map size that can be held in RAM), but the quality should be similar.
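As a rough sketch of that IMU+encoders fusion (the topic names /wheel_odom and /imu/data are assumptions, adjust them to your robot), a minimal robot_localization EKF could look like this:

<node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
  <param name="frequency" value="50"/>
  <param name="two_d_mode" value="true"/>
  <!-- wheel encoder odometry: fuse forward velocity and yaw rate -->
  <param name="odom0" value="/wheel_odom"/> <!-- assumed topic name -->
  <rosparam param="odom0_config">[false, false, false,
                                  false, false, false,
                                  true,  false, false,
                                  false, false, true,
                                  false, false, false]</rosparam>
  <!-- IMU: fuse yaw and yaw rate -->
  <param name="imu0" value="/imu/data"/> <!-- assumed topic name -->
  <rosparam param="imu0_config">[false, false, false,
                                 false, false, true,
                                 false, false, false,
                                 false, false, true,
                                 false, false, false]</rosparam>
</node>

The fused odometry would then be published on /odometry/filtered (the node's default output topic).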
One issue with RPi-like computers is getting the camera driver to run at a decent rate/resolution without using 100% CPU or having USB issues. Ideally, if you have a camera computing depth onboard, you will save downstream CPU. If so, by setting rtabmap to subscribe to wheel/IMU odometry and to 1 Hz images (see the sketch below), you can easily run on a RPi4 or RPi5 + an Arduino for motor control and IMU/encoder readings.
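A minimal sketch of that setup, assuming registered RGB-D topics from the camera driver (the camera topic names and base_link frame below are assumptions) and the /odometry/filtered output from robot_localization above; the Rtabmap/DetectionRate parameter limits map updates to 1 Hz:

<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
  <param name="frame_id"        value="base_link"/> <!-- assumed robot base frame -->
  <param name="subscribe_depth" value="true"/>
  <param name="approx_sync"     value="true"/>
  <!-- update the map at 1 Hz only, to keep CPU usage low -->
  <param name="Rtabmap/DetectionRate" type="string" value="1"/>
  <!-- camera topics are assumptions, adjust to your driver -->
  <remap from="rgb/image"       to="/camera/rgb/image_rect_color"/>
  <remap from="depth/image"     to="/camera/depth_registered/image_raw"/>
  <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
  <!-- use the fused wheel/IMU odometry instead of visual odometry -->
  <remap from="odom"            to="/odometry/filtered"/>
</node>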
I've seen RTAB-Map running decently on systems like the Qualcomm Flight Pro / Snapdragon Flight, integrating their own optimized VIO. If an embedded system can handle VIO in real time, RTAB-Map doesn't need much more to run, as shown by RTAB-Map's ports to Google Tango, Android and iOS.
The next level of computation would be a Jetson AGX (Xavier/Orin) or an Intel NUC (to keep it mobile, unless you don't mind putting a laptop on the robot). The latter may be easier to use with many open-source approaches that are mostly implemented on CPU only. Note that I am currently working on better integration of RTAB-Map with Jetson/NVIDIA CUDA (note that NVIDIA has their own GPU-optimized visual odometry approach that can be integrated with RTAB-Map).
As for the choice of the best stereo/RGB-D camera or lidar sensor... that is another topic!
cheers,
Mathieu