If you want to detect planes in a point cloud, you could check plane segmentation:
https://pcl.readthedocs.io/projects/tutorials/en/master/planar_segmentation.html#planar-segmentation

Some objects could be segmented after removing the planes:
https://pcl.readthedocs.io/projects/tutorials/en/master/cluster_extraction.html#cluster-extraction

However, point clouds from the D435i can be noisy farther from the camera, so you may want to limit the depth range when generating the point cloud of the map.
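As a minimal sketch (not tested), assuming a pcl::PointXYZ cloud, it could look like the code below; the 3 m depth cutoff and the plane/cluster tolerances are placeholder values to tune for your setup:

#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/common/io.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/filters/passthrough.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/search/kdtree.h>
#include <vector>

std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr> segmentObjects(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr & input)
{
  // Limit the depth range to reduce D435i noise far from the camera.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PassThrough<pcl::PointXYZ> pass;
  pass.setInputCloud(input);
  pass.setFilterFieldName("z");
  pass.setFilterLimits(0.3f, 3.0f); // keep points between 0.3 m and 3 m
  pass.filter(*cloud);

  // RANSAC plane segmentation (planar_segmentation tutorial).
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.02); // 2 cm plane tolerance
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coefficients);

  // Keep everything that is NOT on the plane.
  pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);
  extract.filter(*objects);

  // Euclidean clustering on the remaining points (cluster_extraction tutorial).
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusterIndices;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.05); // 5 cm
  ec.setMinClusterSize(100);
  ec.setMaxClusterSize(25000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusterIndices);

  // Copy each cluster into its own cloud.
  std::vector<pcl::PointCloud<pcl::PointXYZ>::Ptr> clusters;
  for (const pcl::PointIndices & indices : clusterIndices)
  {
    pcl::PointCloud<pcl::PointXYZ>::Ptr cluster(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::copyPointCloud(*objects, indices, *cluster);
    clusters.push_back(cluster);
  }
  return clusters;
}

If there are multiple large planes (floor and walls), you may have to repeat the plane segmentation step in a loop until the largest remaining plane is small enough, like in the cluster_extraction tutorial.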
Another option could be to segment the planes/chairs in the 2D images using some NN approach, so that you know the "class" of the 3D points generated from the depth image. You would have to regenerate the 3D point clouds from the RGB-D data published by rtabmap on your side, though.
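For example (a hypothetical sketch, all names and default values are placeholders), assuming the depth image is registered to the color image, the per-pixel class ids come from whatever 2D segmentation network you pick, and fx/fy/cx/cy come from the corresponding CameraInfo, regenerating a labeled cloud is a standard pinhole back-projection:

#include <opencv2/core.hpp>
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>

pcl::PointCloud<pcl::PointXYZL>::Ptr labeledCloudFromDepth(
    const cv::Mat & depth,      // CV_16UC1, depth in mm, registered to color frame
    const cv::Mat & labels,     // CV_8UC1, per-pixel class id from the 2D network
    float fx, float fy, float cx, float cy,
    float maxDepth = 3.0f)      // ignore far, noisy measurements
{
  pcl::PointCloud<pcl::PointXYZL>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZL>);
  for (int v = 0; v < depth.rows; ++v) {
    for (int u = 0; u < depth.cols; ++u) {
      float z = depth.at<uint16_t>(v, u) * 0.001f; // mm -> m
      if (z <= 0.0f || z > maxDepth) {
        continue; // invalid or too far (D435i noise grows with range)
      }
      pcl::PointXYZL pt;
      pt.z = z;
      pt.x = (u - cx) * z / fx; // pinhole back-projection
      pt.y = (v - cy) * z / fy;
      pt.label = labels.at<uint8_t>(v, u);
      cloud->push_back(pt);
    }
  }
  return cloud;
}

You could then run the Euclidean clustering above on the points of each class separately to get individual chair instances.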
From the Kimera semantic approach (https://arxiv.org/pdf/1910.02490):

"the 2D semantic labels can be obtained using off-the-shelf tools for pixel-level 2D semantic segmentation, e.g., deep neural networks [7]–[9], [64]–[69] or classical MRF-based approaches [70]"
cheers,
Mathieu