Losing odometry when doing fixed point scanning
Posted by scanboy
URL: http://official-rtab-map-forum.206.s1.nabble.com/Losing-odometry-when-doing-fixed-point-scanning-tp4336.html
Hello,
I have captured images by spinning an ASUS Xtion Pro on a tripod, stopping every 45 degrees (I have fixed stops, so the angles are accurate). On the first lap the camera points straight ahead, on the second lap it is tilted up by 45 degrees, and on the third lap it is tilted down by 45 degrees. This gives 8*3 = 24 overlapping RGB images and an equal number of registered depth images.
I want to run these as a dataset in RTABMap and yes I have read
http://official-rtab-map-forum.206.s1.nabble.com/How-to-process-RGBD-SLAM-datasets-with-RTAB-Map-td939.html and the other guides and have successfully executed the freiburg examples.
What I find is that RTAB-Map loses odometry very quickly; it can't match features even between the first two frames. I have tried many combinations of feature matching algorithms, as well as Frame->Frame and Frame->Map, and both Visualization and Geometry. (I have checked outside RTAB-Map, using OpenCV, that the features in the images do match.) What am I fundamentally doing wrong here? I feel that I have enough overlap and enough features.
If I make my own odometry file (raw x, y, z, roll, pitch, yaw), which I can do since I know my camera positions, RTAB-Map can run successfully. But the point cloud and the meshes still have a lot of artifacts: many small stray (for lack of a better word) faces/vertices. My gut feeling is that this could be because no feature matching has been done in this case.
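Generating that odometry file is straightforward because the tripod never translates, only rotates. A minimal sketch (the pitch sign convention and the exact pose-file layout RTAB-Map expects are assumptions to be checked against its documentation):

```python
# Sketch of the known tripod poses: camera fixed at the origin, yawing in
# 45-degree stops, with one lap level, one tilted up 45 degrees, one down.
# Angles are in radians; the x,y,z,roll,pitch,yaw ordering and the sign of
# the pitch are assumptions, not a confirmed RTAB-Map file format.
import math

def tripod_poses(yaw_steps=8, pitches_deg=(0.0, 45.0, -45.0)):
    """Return one (x, y, z, roll, pitch, yaw) tuple per captured frame."""
    poses = []
    for pitch_deg in pitches_deg:          # one lap per tilt angle
        for i in range(yaw_steps):         # 8 fixed stops per lap
            yaw = math.radians(i * 360.0 / yaw_steps)
            pitch = math.radians(pitch_deg)
            poses.append((0.0, 0.0, 0.0, 0.0, pitch, yaw))
    return poses

# Writing one whitespace-separated line per frame:
# for x, y, z, roll, pitch, yaw in tripod_poses():
#     print(f"{x} {y} {z} {roll} {pitch} {yaw}")
```

With 8 yaw stops and 3 tilt laps this yields the 24 poses matching the 24 captured frames.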
So my questions are:
1) How can I keep RTAB-Map from losing odometry/tracking?
2) Are the artifacts caused by the lack of feature matching? If not, why do they appear, given that my photos are well lit, correctly exposed, and sharp?
Thanks!