I am trying to perform SLAM navigation with an Azure Kinect mounted on my robot, so I am using the slam_rtabmap.launch file provided by the Azure Kinect ROS Driver. How can I tune its parameters for obstacle detection?
My "obstacles" are the crop rows of an agricultural field, just like those described in this post. You suggested that I properly set the Grid/NormalsSegmentation and Grid/MaxGroundHeight parameters. How can I do that using the Azure Kinect ROS Driver?
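For reference, this is the kind of override I was attempting. It is only a sketch: I am assuming that slam_rtabmap.launch forwards extra RTAB-Map parameters the way rtabmap_ros's own rtabmap.launch does (via a `rtabmap_args` argument), and the 0.15 m ground height is just a guess for my field:

```xml
<launch>
  <!-- Sketch: wrap the driver's SLAM launch file in my own launch file.
       Assumption: slam_rtabmap.launch accepts/forwards a rtabmap_args
       argument like rtabmap_ros's rtabmap.launch does; I have not
       confirmed this in the driver's launch file. -->
  <include file="$(find azure_kinect_ros_driver)/launch/slam_rtabmap.launch">
    <!-- RTAB-Map parameters passed on the node's command line:
         segment the ground by surface normals, and treat points
         below 0.15 m (guessed value) as ground rather than obstacles. -->
    <arg name="rtabmap_args"
         value="--Grid/NormalsSegmentation true --Grid/MaxGroundHeight 0.15"/>
  </include>
</launch>
```

If slam_rtabmap.launch does not expose such an argument, is the right approach instead to copy it and set these as `<param>` entries on the rtabmap node directly?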