We're hoping to use RTAB-Map to allow our robot to autonomously navigate small indoor spaces. Broadly, as the robot identifies people and predefined objects of interest and moves to interact with them, we would like to use RTAB-Map to help us avoid or route around potential obstacles when necessary. We are anticipating two main scenarios:
The robot is equipped with stereo cameras, forward-facing sonar, wheel odometry, and two IMUs (no laser rangefinder, sadly). We're planning to augment the visual system with Google Coral hardware to rapidly identify objects and people in the environment. We have been planning to use a graph-based map, with points of interest marked as attractors and obstacles marked as repulsors, for several reasons:
Firstly: is RTAB-Map an appropriate choice for these use cases, or is there a more lightweight or better-suited tool for either scenario? Secondly: is it possible to add points of interest and obstacles to the graph as attractors and repulsors as described (or is there a better way of achieving our goal)? I have seen that we should be able to pass semantically segmented images to RTAB-Map, but I'm not clear whether it's also possible to pass the associated object categories and use that data to update the map graph. It looks as if AprilTags can be added to the graph as landmarks, so I'm hopeful a similar method exists for arbitrary objects. Any advice or discussion is greatly appreciated — thanks!
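To illustrate what we mean by attractors and repulsors: the routing scheme we have in mind is essentially a classic potential-field method. A minimal sketch in plain Python — the gains and the repulsor influence radius are illustrative values of our own, and none of this comes from RTAB-Map itself:

```python
import math

def potential_field_step(robot, attractors, repulsors,
                         k_att=1.0, k_rep=0.5, rep_radius=1.5):
    """Sum attractive and repulsive 2D forces and return a unit step direction.

    robot, attractors, and repulsors are (x, y) tuples; the gains and
    the repulsor influence radius are illustrative, not tuned values.
    """
    fx = fy = 0.0
    rx, ry = robot
    for ax, ay in attractors:
        # Linear attractive force toward each point of interest.
        fx += k_att * (ax - rx)
        fy += k_att * (ay - ry)
    for ox, oy in repulsors:
        dx, dy = rx - ox, ry - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < rep_radius:
            # Repulsive force grows sharply as the robot nears an obstacle;
            # obstacles beyond rep_radius exert no force at all.
            scale = k_rep * (1.0 / d - 1.0 / rep_radius) / d ** 2
            fx += scale * dx
            fy += scale * dy
    mag = math.hypot(fx, fy)
    if mag == 0.0:
        return (0.0, 0.0)
    return (fx / mag, fy / mag)
```

In practice the attractor/repulsor positions would come from our object detector, with the forces only steering local motion between waypoints.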
To update my own post after additional research — it looks as though the only methods for adding arbitrary landmarks to the graph are using the user_data topics (as outlined here) or hijacking the AprilTag detection system and 'faking' a tag detection (as described here).
Am I correct in my understanding of this, or is there another way to add landmarks to the graph at this point?
Administrator
Hi,
The trick is to use the tag_detections input. For example, we also used it with a CNN approach detecting specific object's pose, by complying to apriltag format. After that, you may check if your landmark detector detect position with or without rotation. Without rotation, you would have to set 9999 the angular parameters of the covariance matrix for each pose. If your landmark detector doesn't know the error, for convenience with can fix it with those parameters (again, set landmark_angular_variance to 9999 if orientation of the landmark is not estimated). Last note, there is no mechanism to detect wrong landmark detection and landmarks should be all unique. cheers, Mathieu |
Administrator
In reply to this post by TACD
For your lightweight question: with external odometry, the rtabmap node alone uses few resources (RAM and CPU). A typical application would be to map the target environment first without people walking around, then launch rtabmap in localization mode together with the navigation stack. Dynamic obstacle avoidance would be handled mainly by move_base; rtabmap would only have to give it the static map and the current localization, similar to this setup: http://wiki.ros.org/rtabmap_ros/Tutorials/MappingAndNavigationOnTurtlebot
cheers, Mathieu |
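The map-first-then-localize workflow above can be sketched with the stock rtabmap_ros launch file. The argument names here are my reading of rtabmap_ros's rtabmap.launch (stereo and localization are documented arguments, but check them against your installed version):

```shell
# First pass: build the map in SLAM mode, assuming a stereo setup.
roslaunch rtabmap_ros rtabmap.launch stereo:=true

# Later runs: localize against the saved database instead of extending it;
# move_base then handles dynamic obstacle avoidance on top.
roslaunch rtabmap_ros rtabmap.launch stereo:=true localization:=true
```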