Posted by g.bartoli
URL: http://official-rtab-map-forum.206.s1.nabble.com/Angles-and-axes-velocity-informations-from-rgbd-odometry-node-tp924p963.html
matlabbe wrote
Yes, lower quality means higher variance. I could copy the pose covariance into the twist covariance, but I am not sure what an external system really needs from the twist covariance.
Well, I'm currently using robot_localization, and the suggested approach is to feed the filter with linear velocities instead of positions and let it integrate over time. This is particularly suited for differential measurements affected by drift, like visual or wheel odometry.
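For reference, this is roughly what that configuration means in robot_localization terms. The odomN_config boolean vector is real robot_localization syntax, but the node name ("ekf_localization") and topic are just my assumptions here, and normally this lives in a YAML/launch file rather than being set from code:

```python
#!/usr/bin/env python
# Illustrative sketch only: robot_localization parameters normally go in a
# YAML/launch file; rospy.set_param is used here just to show the values.
import rospy

rospy.init_node("ekf_config_example")

# Assumed topic name for the visual odometry output.
rospy.set_param("ekf_localization/odom0", "/rgbd_odometry/odom")

# The 15 booleans select which state variables to fuse:
#   x, y, z | roll, pitch, yaw | vx, vy, vz | vroll, vpitch, vyaw | ax, ay, az
# Here only the velocities are fused, so the filter integrates them itself.
rospy.set_param("ekf_localization/odom0_config",
                [False, False, False,    # ignore absolute x, y, z
                 False, False, False,    # ignore absolute roll, pitch, yaw
                 True,  True,  False,    # fuse vx, vy
                 False, False, True,     # fuse vyaw
                 False, False, False])   # no accelerations in Odometry

rospy.set_param("ekf_localization/odom0_differential", False)
```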
In the updated Twist message published by rgbd_odometry, I was about to suggest using viso2's arbitrary values (about 0.10 and 0.15 for position and orientation, and 0.05 and 0.10 for the XYZ and RPH twist velocities), but I don't know why they were chosen like that, nor whether it is correct to use the same scale for both pose and velocity.
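To make the suggestion concrete, here is a minimal Python sketch of filling the 6x6 covariance diagonals of a nav_msgs/Odometry message with those numbers (which, as said above, are admittedly arbitrary and not derived from the actual matching quality):

```python
from nav_msgs.msg import Odometry

def diag6(linear_var, angular_var):
    """Return a flat 6x6 covariance with the given values on the diagonal."""
    cov = [0.0] * 36
    for i in range(6):
        cov[6 * i + i] = linear_var if i < 3 else angular_var
    return cov

odom = Odometry()
# viso2-style fixed values quoted above:
odom.pose.covariance = diag6(0.10, 0.15)   # position / orientation
odom.twist.covariance = diag6(0.05, 0.10)  # linear / angular velocity
```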
Yes, I saw that parameter. I thought it would be better to leave it set to "false"; that seems more correct to me.
matlabbe wrote
There could be a third "relative" approach based on reprojection error. In both approaches, the same variance is set to all components (only diagonal though). Well, with the first approach, I could be more specific by computing a variance for each x-y-z component (but not for roll-pitch-yaw).
Ok, it would be interesting to see that.
To my knowledge, "robot_localization" uses absolute variances, i.e. meters for displacements and radians for rotations. However, let me tell you (and other people trying this approach) that in my experiments, even if the fused odometry can be more accurate, the map built using the Kalman-filtered odometry is worse than one built using a single odometry source (visual or encoders).
This is caused by the intrinsic delay of the Kalman filter. It is not a fixed time delay (the filter rate is correct and synchronized), but comes from the internal model, which has a non-zero adaptation time to measurement changes. I don't have a screenshot at hand, but the 3D cloud clearly shows many "ghosts" and misaligned parts, caused by the fact that the RGB-D data is passed to rtabmap in real time, while the odometry coming from the filter has a different inertia. I hope I explained this well despite my poor English... :)
The solution I am trying right now is to implement a node that takes wheel, visual and IMU odometries as input: if the visual odometry is not null, the output to rtabmap is [X,Y,H] (X and Y from visual odometry, H from the IMU); if a null pose is received (i.e. tracking lost), wheel odometry together with the IMU is used until a new valid visual odometry is received. In the coming days, I will post the results obtained with this approach, ok?
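For the curious, something along these lines (a rough sketch; the topic names and the "all-zero pose means tracking lost" convention are assumptions on my side, not rtabmap's documented behavior):

```python
#!/usr/bin/env python
# Sketch of the switching node described above. Topic names and the
# all-zero-pose check for lost tracking are assumptions.
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu
from tf.transformations import euler_from_quaternion, quaternion_from_euler


class OdomSwitcher(object):
    def __init__(self):
        self.wheel_odom = None
        self.imu_yaw = 0.0
        self.pub = rospy.Publisher("/odom_fused", Odometry, queue_size=10)
        rospy.Subscriber("/wheel_odom", Odometry, self.wheel_cb)
        rospy.Subscriber("/imu/data", Imu, self.imu_cb)
        rospy.Subscriber("/visual_odom", Odometry, self.visual_cb)

    def wheel_cb(self, msg):
        self.wheel_odom = msg

    def imu_cb(self, msg):
        q = msg.orientation
        _, _, self.imu_yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])

    def visual_cb(self, msg):
        p, q = msg.pose.pose.position, msg.pose.pose.orientation
        lost = (p.x == p.y == p.z == 0.0 and q.x == q.y == q.z == q.w == 0.0)
        if lost and self.wheel_odom is not None:
            # Tracking lost: fall back to wheel odometry for X, Y.
            # A real node should chain relative wheel displacements onto the
            # last fused pose here, to avoid jumps when switching sources.
            src = self.wheel_odom.pose.pose.position
        else:
            src = p
        out = Odometry()
        out.header.stamp = rospy.Time.now()
        out.header.frame_id = "odom"
        out.pose.pose.position.x = src.x
        out.pose.pose.position.y = src.y
        # Heading (H) always comes from the IMU.
        qx, qy, qz, qw = quaternion_from_euler(0.0, 0.0, self.imu_yaw)
        out.pose.pose.orientation.x = qx
        out.pose.pose.orientation.y = qy
        out.pose.pose.orientation.z = qz
        out.pose.pose.orientation.w = qw
        self.pub.publish(out)


if __name__ == "__main__":
    rospy.init_node("odom_switcher")
    OdomSwitcher()
    rospy.spin()
```

As noted in the comments, the fallback shown here just copies the wheel pose; the version I am testing accumulates relative displacements so the output stays continuous across source switches.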
Thanks,
Guido