Hi Mathieu,
I use the rgbd_odometry node to find the position of an ASUS Xtion Pro Live camera. I want to ask whether that node can also provide the angular and linear velocities. I need that information for my robot_localization node. Thank you so much!
Hi vmhoang68,
If it can be of any help, I'm currently using the robot_localization EKF node to fuse wheel and RGBD odometry, too. I work in planar mode (nonholonomic robot) and, for both sources, I configure the EKF to use only XYH, without linear or angular velocities, because including them gave much worse accuracy. The biggest improvement then came from getting the heading from an IMU sensor and feeding it to the EKF as well.
~Guido

Administrator

I added (see this commit) the twist values to the published odometry messages. The twist covariance values are still null, since I don't know what to put there. If someone has an idea how to set them, tell me!
cheers 
Shouldn't they be set similarly to how you compute the XYZ pose covariances, inversely proportional to odometry quality? I think the velocities can be similarly affected by the number of tracked features, am I right?
~Guido

Administrator

Yes, lower quality means higher variance. I could copy the pose covariance into the twist covariance, but I am not sure what an external system really needs from the twist covariance. Even for the pose covariance, there are two approaches to set it in rtabmap odometry, controlled by Odom/VarianceFromInliersCount (default false):

* If true (relative): the variance is the inverse of the number of inliers, v = 1/inliers.
* If false (absolute): the variance is the average 3D position error of the inliers, v = "xyz variance between 3D point inliers".

There could be a third "relative" approach based on the reprojection error. In both approaches, the same variance v is set to all components (only on the diagonal though):

Cov(x,y,z,roll,pitch,yaw) = [v 0 0 0 0 0; 0 v 0 0 0 0; 0 0 v 0 0 0; 0 0 0 v 0 0; 0 0 0 0 v 0; 0 0 0 0 0 v]

Well, with the first approach, I could be more specific by computing a variance for each of x, y and z (but not for roll, pitch and yaw). For graph optimization, TORO uses only the highest variance among x, y, z for the translation variance and the highest among roll, pitch, yaw for the rotation variance. I think g2o and GTSAM use all components (the whole covariance matrix instead of only the diagonal).

For packages like robot_pose_ekf or robot_localization, it would be good to know exactly which kind of covariance they expect: relative (the variance can be at any scale, but should be consistent between odometry messages so you can tell whether one is better than another) or absolute (in meters and radians, for example). viso2 sets fixed pose and twist covariances; fovis computes the covariance using a Jacobian, then sets only the twist covariance.

cheers,
Mathieu
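The two variance strategies above can be sketched as a small helper (a hypothetical illustration, not rtabmap's actual code; the function name and arguments are made up for clarity):

```python
import numpy as np

def odom_covariance(inliers_3d_err=None, n_inliers=None, from_inlier_count=False):
    """Build the 6x6 odometry covariance described above.

    If from_inlier_count is True (Odom/VarianceFromInliersCount=true), the
    variance is the relative value 1/inliers; otherwise it is the absolute
    average 3D position error of the inliers. The same value v fills the
    whole diagonal (x, y, z, roll, pitch, yaw); off-diagonal terms stay 0.
    """
    if from_inlier_count:
        v = 1.0 / max(n_inliers, 1)          # relative: more inliers -> lower variance
    else:
        v = float(np.mean(inliers_3d_err))   # absolute: mean 3D error of the inliers
    return np.diag([v] * 6)
```

For example, with 100 inliers the relative variance would be 0.01 on every diagonal entry; with the absolute approach, the variance tracks the actual 3D alignment error instead.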
Well, I'm currently using robot_localization, and the suggested approach is to feed the filter with linear velocities instead of positions and let it integrate them over time. This is particularly suited for differential measurements affected by drift, like visual or wheel odometry. For the updated Twist message published by rgbd_odometry, I was about to suggest using viso2's arbitrary values (about 0.10 and 0.15 for the position and orientation pose covariances, and 0.05 and 0.10 for the XYZ and RPH twist covariances), but I don't know why they were chosen like that, nor whether it is correct to use the same scale for both pose and velocity.

Yes, I saw that parameter; I thought it would be better to leave it at "false", which seems more correct to me. Ok, it would be interesting to see that. To my knowledge, robot_localization uses absolute variances, i.e. meters for displacements and radians for rotations.

However, let me tell you (and other people trying this approach) that in my experiments, even if the fused odometry can be more accurate, the map built using a Kalman filter is worse than the one built using only one odometry source (visual or encoders). This is caused by the intrinsic delay of the Kalman filter: not a fixed time delay (the filter rate is correct and synchronized), but one caused by the internal model, which has a nonzero adaptation time to measurement changes. I don't have a screenshot at hand, but the 3D cloud clearly shows many "ghosts" and misaligned parts, caused by the fact that the RGBD data is passed to rtabmap in real time, while the odometry coming from the filter has a different inertia. I hope I explained this well despite my poor English... :)

The solution I am trying right now is to implement a node that takes the wheel, visual and IMU odometries as input: if the visual odometry is not null, the output to rtabmap is [X, Y, H] (X and Y from the visual odometry and H from the IMU); if a null pose is received (i.e. tracking lost), the wheel odometry together with the IMU is used until a new valid visual odometry is received. In the coming days, I will post the results obtained with this approach, ok? Thanks, Guido
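The selection logic of that node could be sketched roughly like this (a minimal illustration of the idea, not Guido's actual implementation; the function name and tuple representation are assumptions):

```python
def fuse(visual_odom, wheel_odom, imu_heading):
    """Pick the (x, y, heading) pose to forward to rtabmap.

    visual_odom / wheel_odom: (x, y) tuples, with visual_odom set to None
    when visual tracking is lost. imu_heading: yaw angle from the IMU.
    """
    if visual_odom is not None:
        x, y = visual_odom     # trust visual odometry while it is tracking
    else:
        x, y = wheel_odom      # fall back to wheel odometry on tracking loss
    return (x, y, imu_heading) # heading H always comes from the IMU
```

The key design point is that the heading is always taken from the IMU, while the translation source switches depending on whether visual tracking is valid.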
~Guido
