Handling lost visual odometry when using sensor fusion (robot_localization)


BrettHemes
Hi,

I am trying to integrate various additional sources of odometry into rtabmap and have been using the sensor_fusion.launch file as an example.

I have everything working in well-behaved scenarios but have issues when the visual odometry fails.  In the default mapping example, the null output from the visual odometry results in a mapping pause with a red screen; when using robot_localization, however, the null output (with BAD_VARIANCE) is consumed and filtered, producing non-null output into rtabmap.  In this case I end up with some non-zero translational velocity from the filter, and the mapping spirals (literally) out of control.

My question is: is there a suggested/preferred way of handling "lost" odometry components in the sensor fusion case?

My current thought is to add an intermediate node that takes in the filtered odometry, checks the visual odometry for the lost condition, and feeds rtabmap either null output or the EKF output accordingly... but I am concerned that the EKF may still suffer from the null inputs with arbitrarily high variance.  Should the variance perhaps be set to -1 instead of the current 9999 constant?
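For reference, the lost-detection and switching logic such a gating node might use could look like the following sketch (ROS subscriber/publisher plumbing omitted; the 9999 BAD_VARIANCE value and the null-pose convention are assumptions based on rtabmap's behavior as described above, not a confirmed API):

```python
# Decision logic for a hypothetical "odometry gate" node sitting between
# robot_localization and rtabmap.  Assumption: when lost, the visual odometry
# publishes a null (all-zero) pose with a large covariance (9999) on the
# first diagonal entry of the 6x6 pose covariance.

LOST_COVARIANCE = 9999.0  # assumed BAD_VARIANCE constant


def visual_odometry_lost(pose, covariance):
    """Return True if the visual odometry message looks like a 'lost' null output."""
    null_pose = all(abs(v) < 1e-9 for v in pose)
    bad_variance = covariance[0] >= LOST_COVARIANCE
    return null_pose and bad_variance


def select_output(vo_pose, vo_covariance, ekf_odom):
    """Forward the EKF estimate while visual odometry is healthy; return None
    (i.e. publish null odometry) when lost, so rtabmap pauses the mapping
    instead of consuming a drifting filtered pose."""
    if visual_odometry_lost(vo_pose, vo_covariance):
        return None
    return ekf_odom
```

The node itself would subscribe to both the raw visual odometry and the filtered EKF topic, and republish the result of `select_output` on the topic rtabmap listens to.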

I have some other ideas as well but wanted to at least check in here to see if others have experienced/solved the issue or even if the functionality is already in there and I am just missing some parameter settings.

Thanks,
Brett

Re: Handling lost visual odometry when using sensor fusion (robot_localization)

matlabbe
Administrator
Hi Brett,

A high variance of 9999 means that robot_localization should effectively "ignore" that entry. I am not aware of whether a covariance of -1 makes robot_localization ignore the corresponding value. Note also that you can stop visual odometry from publishing a null odom topic by setting the parameter "publish_null_when_lost" to false on the rgbd_odometry/stereo_odometry node.

If you have another source of pose estimation, it would be used to continue estimating the filtered pose. Take Google Tango (visual inertial odometry) as an example: if we obstruct the fisheye camera, the filter still estimates the position of the Tango using the IMU for ~1 second, then it stops (as it would continuously drift using only the IMU), and the filter is reinitialized when features can be extracted from the camera again. I am not sure how to achieve a similar behavior with robot_localization, maybe by blocking all inputs to robot_localization when visual odometry is lost. So to answer your main question: no, I am not aware of a built-in way to handle this.
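As a sketch of that blocking idea: a small gate that withholds every sensor input bound for robot_localization while visual odometry is lost, so the filter neither drifts on IMU/wheel data alone nor produces output for rtabmap. This is an assumed workaround, not an existing robot_localization feature:

```python
class FilterGate:
    """Gates all sensor messages bound for robot_localization.

    While visual odometry is lost, forward() drops every input (returns None),
    effectively freezing the EKF until the visual odometry reinitializes,
    similar to the Tango behavior described above.
    """

    def __init__(self):
        self.vo_lost = False

    def set_vo_lost(self, lost):
        # Called from the visual-odometry callback, e.g. when a null /
        # BAD_VARIANCE message is detected.
        self.vo_lost = lost

    def forward(self, msg):
        # Drop the message while lost; pass it through otherwise.
        return None if self.vo_lost else msg
```

Each sensor callback (IMU, wheel odometry, visual odometry) would pass its message through `forward` before republishing on the topics robot_localization subscribes to.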

cheers,
Mathieu

Re: Handling lost visual odometry when using sensor fusion (robot_localization)

BrettHemes
Thanks for the feedback.  I won't have time to work on this for a couple of weeks, but I will update this thread when I figure it out.

Brett