Hi,
If your stereo camera is RGB, you can pass the left frame as color to rtabmap and the resulting point cloud will contain color. Otherwise, you can generate the disparity using the stereo_image_proc package (it uses the same disparity algorithm that rtabmap uses for stereo). I think it also generates a depth image and a point cloud directly. However, you cannot just feed rtabmap that depth image together with an RGB image taken from a different camera: you need to register the depth image to the RGB camera first. To do so, you can feed the point cloud to the rtabmap_ros/pointcloud_to_depthimage node (see ros1 doc
here, which matches the ros2 implementation). You will then get a depth image registered to your RGB camera; feed that depth image and the RGB image to rtabmap.
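If it helps, here is a rough ROS 2 launch sketch of that pipeline. Treat the topic names, frame ids and exact executable/package names as assumptions (they depend on your camera driver and on your stereo_image_proc / rtabmap_ros versions), so double-check them against the package docs before using it:

```python
# Sketch only: stereo pair -> disparity -> point cloud -> depth registered to the RGB camera.
# Topic names, namespaces and executable names below are assumptions, adjust to your setup.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # 1) Disparity from the rectified stereo pair (stereo_image_proc).
        Node(
            package='stereo_image_proc',
            executable='disparity_node',
            remappings=[
                ('left/image_rect', '/stereo/left/image_rect'),
                ('left/camera_info', '/stereo/left/camera_info'),
                ('right/image_rect', '/stereo/right/image_rect'),
                ('right/camera_info', '/stereo/right/camera_info'),
            ],
        ),
        # 2) Point cloud from the disparity.
        Node(
            package='stereo_image_proc',
            executable='point_cloud_node',
            remappings=[
                ('left/image_rect_color', '/stereo/left/image_rect'),
                ('left/camera_info', '/stereo/left/camera_info'),
                ('right/camera_info', '/stereo/right/camera_info'),
                # subscribes to the 'disparity' topic published above
            ],
        ),
        # 3) Re-project the point cloud into the external RGB camera to get a
        #    depth image registered to that camera (rtabmap_ros, or rtabmap_util
        #    on recent releases).
        Node(
            package='rtabmap_ros',
            executable='pointcloud_to_depthimage',
            parameters=[{
                'fixed_frame_id': 'odom',  # assumption: a TF fixed frame of your robot
            }],
            remappings=[
                ('cloud', '/points2'),                       # cloud from step 2
                ('camera_info', '/rgb_camera/camera_info'),  # RGB camera to register to
                # the registered depth image is published by this node; feed it
                # to rtabmap together with /rgb_camera/image
            ],
        ),
    ])
```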
However, if you don't really care about the color, I would stick to the stereo pair images directly! For example, if you have a D435 camera, I don't recommend using the color camera, as it doesn't have a global shutter (images are very blurry!) and its FOV is smaller than that of the IR stereo cameras... in other words, visual odometry is a lot worse with the RGB camera than with the stereo IR cameras. Note also that an external RGB camera may not be hardware time-synchronized with the stereo camera, adding more problems.
cheers,
Mathieu