localization pose


localization pose

aviadoz
Hello,

I don't quite understand the purpose of the odometry input for rtabmap's SLAM.
If the algorithm uses loop closure for localization, and I assume the robot detects where it is with high probability,
why does the algorithm need the odometry data? I ran some tests and noticed (by echoing the /rtabmap/localization_pose topic) that the robot can only localize itself at points it has already visited in the previous map.
I use encoders and an IMU for odometry, but I don't really understand where the algorithm uses that input.
It behaves as if it were in visual odometry mode, although I did notice differences between running with visual odometry and without it (with the encoder+IMU odometry it was a little bit better).
So, if the robot is at point (2, 2) in the 2D map according to the odometry data but at (2.3, 2.4) according to rtabmap, which one is more certain? I assume the loop closure is more certain than the odometry, so I would need to "change" the odometry values to the rtabmap values. I know that is not how it works; I just want to describe the point that I don't understand.

Thanks,
Aviad
 

Re: localization pose

matlabbe
Administrator
Hi Aviad,

The odometry input is required to adjust the /map -> /odom transform on loop closures. It is assumed that whenever the robot starts in the environment, the odometry starts at (0,0), corresponding to the TF /odom -> /base_link. But in reality, rtabmap may detect that it is, for example, at (2,2) based on its previous map. rtabmap will then publish the /map -> /odom TF to correct the odometry pose, so that /map -> /odom -> /base_link is (2,2). See REP105 for more info about the map, odom and base_link frames. You don't have to republish the odometry poses with that correction: if any other node wants the pose of base_link in the map frame, it just queries TF for /map -> /base_link.
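
For example, here is a minimal Python sketch (not rtabmap code; the frame names "map" and "base_link" and the node name are assumptions to adapt to your setup) of how another node can query TF for the corrected pose:

#!/usr/bin/env python
# Minimal sketch: read the robot pose in the map frame from TF.
# Frame names "map" and "base_link" are assumptions; adjust to your setup.
import rospy
import tf2_ros

rospy.init_node('map_pose_listener')
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

rate = rospy.Rate(10)
while not rospy.is_shutdown():
    try:
        # map -> base_link already includes the map -> odom correction
        # published by rtabmap on loop closures.
        t = tf_buffer.lookup_transform('map', 'base_link', rospy.Time(0))
        rospy.loginfo('x=%.2f y=%.2f',
                      t.transform.translation.x,
                      t.transform.translation.y)
    except (tf2_ros.LookupException,
            tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        pass
    rate.sleep()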

cheers,
Mathieu

Re: localization pose

aviadoz
Hi,

I read REP105 and a lot of posts about this, and I still can't fully understand why we need the odom frame.
If the robot's position comes from the RealSense, which produces the pose estimate by itself, why do we need to publish an odom frame too? After all, rtabmap corrects the robot's position on every loop closure. It seems like it doesn't care what information is in the odom topic (or TF).

I know this is a general question, but I would be glad if you could help me with it.

Tnx,
Aviad

Re: localization pose

matlabbe
Administrator
Hi Aviad,

Localization/loop closure detection may not occur at every camera frame, because 1) the robot moved into an area not previously mapped, or 2) the environment changed too much to be recognized. In those cases, you need some sort of local estimate of the robot's trajectory (i.e., the odometry!) to keep tracking its pose until a new localization / loop closure detection happens.

Other differences between localization (/map frame) and odometry (/odom frame) as stated in REP105:

------

odom

The coordinate frame called odom is a world-fixed frame. The pose of a mobile platform in the odom frame can drift over time, without any bounds. This drift makes the odom frame useless as a long-term global reference. However, the pose of a robot in the odom frame is guaranteed to be continuous, meaning that the pose of a mobile platform in the odom frame always evolves in a smooth way, without discrete jumps.

In a typical setup the odom frame is computed based on an odometry source, such as wheel odometry, visual odometry or an inertial measurement unit.

The odom frame is useful as an accurate, short-term local reference, but drift makes it a poor frame for long-term reference.

map

The coordinate frame called map is a world fixed frame, with its Z-axis pointing upwards. The pose of a mobile platform, relative to the map frame, should not significantly drift over time. The map frame is not continuous, meaning the pose of a mobile platform in the map frame can change in discrete jumps at any time.

In a typical setup, a localization component constantly re-computes the robot pose in the map frame based on sensor observations, therefore eliminating drift, but causing discrete jumps when new sensor information arrives.

The map frame is useful as a long-term global reference, but discrete jumps in position estimators make it a poor reference frame for local sensing and acting.
------
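
To make this concrete with the numbers from the first post, here is a small 2D sketch (illustrative only, not rtabmap's internals; it just composes the two transforms with numpy):

# Illustrative sketch: odometry says the robot is at (2.3, 2.4), the
# localization recognizes the place as (2, 2), so a map -> odom correction
# is published instead of rewriting the odometry itself.
import numpy as np

def se2(x, y, yaw):
    # Homogeneous 2D transform.
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0., 0., 1.]])

# odom -> base_link: continuous but drifting (wheel/IMU or visual odometry).
odom_T_base = se2(2.3, 2.4, 0.0)

# map -> odom: identity until a loop closure, then re-estimated in one jump.
map_T_odom = se2(-0.3, -0.4, 0.0)

# Pose the rest of the system uses: map -> odom -> base_link.
map_T_base = map_T_odom.dot(odom_T_base)
print(map_T_base[0, 2], map_T_base[1, 2])  # -> 2.0 2.0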

cheers,
Mathieu

Re: localization pose

aviadoz
Hi,

Understood. Many thanks.

So, I did some experiments in our lab and noticed that when I used visual odometry only, the drift in the odom frame was very small, while with encoder+IMU odometry the drift was bigger. In the experiment the robot drove about 20 m back and forth, with turns. Also, when I ran the experiment with visual odometry I couldn't complete the mapping because it got stuck during a turn.
Does it make sense that the drift was lower?

Thank you
Aviad