Burhanpurkar et al. 2017: clarifications on feeding obstacles_detection obstacles to rtabmap
Hello,
I've recently read the 2017 paper by Burhanpurkar et al. about the autonomous wheelchair. It contains the following sentences:
> We utilize wheel odometry instead, even if it is slightly worse than VO in environments that are visually rich. To reduce odometric drift, 2D projected obstacles from consecutive graph (map) nodes are used to refine the pose-to-pose transformations. The 2D projected obstacle data is also employed to refine any identified loop closures.
Looking at the system architecture figure (Fig. 2), it shows: [figure image not reproduced]
Looking at the rtabmap docs, I can see an rtabmap_ros/obstacles_detection node that publishes an obstacles topic with a PointCloud2 message. How is this fed into rtabmap? Via scan_cloud, so that rtabmap treats it as if it were a 3D laser scan? Given that the paper says the obstacles are projected into 2D, is the obstacles point cloud projected into 2D (into a LaserScan message) and then passed in on the scan topic?
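For reference, here is a minimal sketch of how I am currently running obstacles_detection; the input cloud topic is just an assumption from a typical RGB-D camera setup:

```xml
<!-- Minimal sketch: obstacles_detection as a standalone nodelet.
     /camera/depth_registered/points is an assumed RGB-D cloud topic. -->
<node pkg="nodelet" type="nodelet" name="obstacles_detection"
      args="standalone rtabmap_ros/obstacles_detection">
  <remap from="cloud" to="/camera/depth_registered/points"/>
  <!-- Publishes the PointCloud2 topics /ground and /obstacles -->
</node>
```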
Also, I'm not sure I'm understanding the paper correctly: is it right to say that the 2D projected obstacles effectively compensate for the relatively large wheel odometry drift?
Re: Burhanpurkar et al. 2017: clarifications on feeding obstacles_detection obstacles to rtabmap
In this other thread, I was pointed to the RGBD/NeighborLinkRefining parameter. That definitely looks like the pose-to-pose transformation refinement discussed in the paper. Is that correct? I'll give it a try in the coming days/weeks.
Re: Burhanpurkar et al. 2017: clarifications on feeding obstacles_detection obstacles to rtabmap
Administrator
Hi,
This is a combination of RGBD/NeighborLinkRefining=true, Reg/Strategy=1 and Reg/Force3DoF=true, together with feeding the 2D obstacles as point clouds to rtabmap (subscribe_scan_cloud=true).
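Something like the following minimal launch sketch; the camera, odometry and obstacle topic names are examples that depend on your setup:

```xml
<node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
  <!-- Subscribe to the obstacle cloud as a scan cloud -->
  <param name="subscribe_depth"      type="bool" value="true"/>
  <param name="subscribe_scan_cloud" type="bool" value="true"/>
  <!-- /obstacles is assumed to be published by an obstacles_detection node -->
  <remap from="scan_cloud" to="/obstacles"/>

  <!-- Example RGB-D and odometry inputs; adjust to your robot -->
  <remap from="rgb/image"       to="/camera/rgb/image_rect_color"/>
  <remap from="depth/image"     to="/camera/depth_registered/image_raw"/>
  <remap from="rgb/camera_info" to="/camera/rgb/camera_info"/>
  <remap from="odom"            to="/odom"/>

  <!-- Refine consecutive node links with ICP, constrained to 2D -->
  <param name="RGBD/NeighborLinkRefining" type="string" value="true"/>
  <param name="Reg/Strategy"              type="string" value="1"/>
  <param name="Reg/Force3DoF"             type="string" value="true"/>
</node>
```

Reg/Strategy=1 makes the registration use ICP on the scan cloud, and Reg/Force3DoF=true constrains the resulting transforms to x, y and yaw, which matches the 2D projected obstacles described in the paper.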