Few questions regarding stereo mapping with RTAB ROS


stevemartinov
I want to perform SLAM, then localization, and collision avoidance using RTAB-Map + a stereo camera. I hope you don't mind answering a few questions:

1. I plan to use this Intel camera: LINK.
Do you think this is a good choice, given its wide FoV and global shutters?

2. From my understanding, I need to provide an "odom" topic. From your launch example here, I can see that I can set the "visual_odometry" param to true. Am I correct that the "odom_topic" parameter just below is directly related to visual_odometry? And am I correct that I can set that topic to "visual_odom", fuse it with my wheel odometry, and provide the result as "odom"?

3. I can see that RTAB-Map approximates the stereo camera input with depth images. How accurate is this algorithm? Can I use it for collision avoidance?

4. I plan to place the camera 5 cm above the ground, so the ground (floor) will cover around 30-40% of the camera's horizontal FoV. Is that desirable?

Thank you.

Re: Few questions regarding stereo mapping with RTAB ROS

matlabbe
Administrator
Hi,

1. Yes, it seems like a good camera for robust visual odometry.

2. This camera may already come with a visual-inertial odometry approach, so you could set visual_odometry to false and use the odometry topic coming from the Intel camera (set as odom_topic). Setting "visual_odometry" to true means that a rtabmap_ros/stereo_odometry node will be created to compute odometry; otherwise, the topic set in odom_topic is used as the odometry input to the rtabmap node. To fuse with wheel odometry, you would have to set publish_tf to false for the stereo_odometry node (see the wiring sketch after point 4). It may be better to use the example here instead of the launch file, to make it easier to change the parameters.

3. RTAB-Map generates the depth image from stereo using the cv::StereoBM algorithm. It is relatively safe as long as there is texture on the objects; a white wall would not be detected (see the depth sketch below).

4. Ideally, we would want to be able to track features everywhere in the camera's FoV. It would still work, though.
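
A minimal roslaunch sketch of the odometry wiring from point 2, assuming the robot_localization package does the fusion. All topic names, frame names, and the EKF details here are placeholders to adapt; the stereo image remappings and the odom0_config/odom1_config matrices are omitted for brevity:

<launch>
  <!-- Visual odometry from the stereo pair. publish_tf=false so that the
       fused odometry below owns the odom->base_link TF instead. -->
  <node pkg="rtabmap_ros" type="stereo_odometry" name="stereo_odometry">
    <remap from="odom" to="/visual_odom"/>
    <param name="publish_tf" type="bool"   value="false"/>
    <param name="frame_id"   type="string" value="base_link"/>
    <!-- left/right image and camera_info remappings omitted -->
  </node>

  <!-- EKF fusing wheel odometry with visual odometry. -->
  <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
    <param name="odom0" value="/wheel_odom"/>
    <param name="odom1" value="/visual_odom"/>
    <remap from="odometry/filtered" to="/odom"/>
  </node>

  <!-- rtabmap consumes the fused result (the visual_odometry=false case). -->
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
    <remap from="odom" to="/odom"/>
    <param name="frame_id" type="string" value="base_link"/>
  </node>
</launch>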

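And for point 3, the same kind of block-matching depth can be reproduced outside RTAB-Map with OpenCV directly; a minimal Python sketch, with placeholder calibration values:

import numpy as np
import cv2

# Rectified grayscale stereo pair (placeholder file names).
left  = cv2.imread("left.png",  cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: numDisparities must be a multiple of 16, blockSize odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity: Z = f * B / d (focal length in pixels, baseline in meters).
f, B = 700.0, 0.05  # placeholder calibration
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]

Textureless regions return invalid disparity, which is why a white wall produces no depth for obstacle detection.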
cheers,
Mathieu

Re: Few questions regarding stereo mapping with RTAB ROS

stevemartinov
Thank you for the reply.

Regarding the white-wall collision avoidance: if the FoV always includes the floor, wouldn't the algorithm then detect the white walls as an obstacle? I know that if the camera comes too close to the wall (when the FoV sees only white walls and nothing else) it will fail, the same as a lidar would.

But what if the environment is featureless (like an industrial plant, where the area is big with many dynamic objects) but the robot always sees two yellow lines, like on a road? Would it help to always detect the yellow lines as features?

Re: Few questions regarding stereo mapping with RTAB ROS

matlabbe
Administrator

A stereo camera will see the ground only if it is textured. As the ground is not an obstacle, it is usually not a problem to not "see" it, unless there are stairs or holes. As for white walls, an RGB-D camera or a lidar should see them unless they are very close; a stereo camera will simply not see them, regardless of the distance (or the walls will be very distorted, as with ZED's depth estimator).

The yellow lines are not "friendly" for feature-based odometry or localization. You would need another vision approach to detect those lines.

Re: Few questions regarding stereo mapping with RTAB ROS

stevemartinov
Yes, you are right, but I do not really need visual odometry on its own, as I will feed in wheel odometry and fuse it with the visual odometry.

When I say "yellow lines" in plants, I mean this kind of environment:
LINK

Or this environment:
LINK

Do you think the stereo camera will be a good choice for autonomous navigation? The environment seems to have a lot of features.

Re: Few questions regarding stereo mapping with RTAB ROS

matlabbe
Administrator
That is what I had in mind. You could combine it with a line detector using only a mono camera, similar to car lane detection (a sketch follows below).
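
A rough sketch of such a line detector in Python/OpenCV; the HSV thresholds and Hough parameters are placeholders to tune for the plant's lighting:

import numpy as np
import cv2

def detect_yellow_lines(bgr):
    """Detect yellow floor lines, similar to car lane detection."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Placeholder HSV range for "yellow".
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))
    edges = cv2.Canny(mask, 50, 150)
    # Probabilistic Hough transform returns line segments (x1, y1, x2, y2).
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=50, minLineLength=80, maxLineGap=20)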

That kind of environment is difficult for visual and lidar localization, as it changes over time, making it very challenging to re-localize if all the visual features have changed, or (for lidar localization) if all the geometry has changed because boxes have been moved. You would need a precise odometry between places/waypoints that do not change too much, so that the robot can re-localize.

In the second video, they use wireless transmitters to triangulate the AGV's position. A lower-cost approach could be to use ArUco markers or AprilTags for localization; only a single camera would be required.
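
A minimal Python sketch of the marker idea, using the pre-4.7 cv2.aruco module from opencv-contrib-python (newer OpenCV versions use cv2.aruco.ArucoDetector instead); the intrinsics, distortion, dictionary, and marker size are all placeholders:

import numpy as np
import cv2

frame = cv2.imread("frame.png")                 # image from the single camera
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])           # placeholder intrinsics
dist = np.zeros(5)                              # placeholder distortion

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
if ids is not None:
    # One pose per detected marker; 0.15 m is a placeholder side length.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(corners, 0.15, K, dist)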

cheers,
Mathieu


Re: Few questions regarding stereo mapping with RTAB ROS

stevemartinov
Yes, I do realise that, but the question is: can I use stereo for mapping and obstacle avoidance in such environments?

And also, would you say that rtabmap stereo is a state-of-the-art SLAM?

Re: Few questions regarding stereo mapping with RTAB ROS

matlabbe
Administrator
Hi,

Yes, it can work, with the limitations I raised. See this post (stereo SLAM in that kind of environment), where some of them are discussed.

There are more accurate visual SLAM approaches out there when we look only at the accuracy of the trajectory estimation, in particular for visual odometry, but they are often monolithic and cannot be configured with an external odometry approach (e.g., wheel odometry) the way RTAB-Map can, which makes RTAB-Map more robust in vision-unfriendly environments.

It doesn't mean they are incompatible with rtabmap. In the paper "RTAB-Map as an Open-Source Lidar and Visual SLAM Library for Large-Scale and Long-Term Online Operation", we evaluated some visual odometry approaches that could be integrated with RTAB-Map (see Tables 3-7; F2M is the default one in rtabmap):

vo ---- /odom ---> rtabmap
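
In launch-file terms, plugging an external VO node into rtabmap is just a topic remapping; a minimal sketch, where "some_vo_package" and its output topic name are placeholders:

<!-- Any external VO node publishing nav_msgs/Odometry can feed rtabmap. -->
<node pkg="some_vo_package" type="vo_node" name="vo">
  <remap from="odometry_out" to="/odom"/>
</node>
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <remap from="odom" to="/odom"/>
</node>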

cheers,
Mathieu

Re: Few questions regarding stereo mapping with RTAB ROS

stevemartinov
Cheers, that is so informative! I wasn't that confident about using rtabmap with stereo, but after a couple of days of discussing with you I have a much better understanding.

In the link you sent me, I saw that you were suggesting tuning the RGBD parameters; does that apply to stereo too? (As you mentioned, the user had a stereo camera and the same environment.)


Re: Few questions regarding stereo mapping with RTAB ROS

matlabbe
Administrator
Hi,

In that post, RGBD/DepthAsMask is ignored for real stereo image input. The user had a ZED camera, but he was using the depth image generated by the ZED SDK, so the input to rtabmap was RGB-D, not stereo. RGBD/LocalBundleOnLoopClosure, however, can be used with stereo too (see the sketch below).
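
For reference, RTAB-Map's internal parameters such as RGBD/LocalBundleOnLoopClosure are passed to the rtabmap node as string parameters; a minimal sketch:

<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
  <!-- RTAB-Map's internal parameters are all set as string values. -->
  <param name="RGBD/LocalBundleOnLoopClosure" type="string" value="true"/>
</node>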

cheers,
Mathieu