Thanks Mathieu! Appreciate the response a lot!
Okay, so the visual axis is only related to the camera pose and not to the real world. It must then be the model itself, with the bending, that is causing the issue when I import it into Pointcab. I'll work on this to improve my scanning by returning to that first pose to align the X & Y (ground) plane again. I think I always just returned the camera to this area facing the wall after a few frames, matching that ID instead of the first pose!

Can you also kindly explain what exactly causes drift, regardless of correcting it with loop closures? What I mean is, what calculation or movement causes the drift error? Is it because the sensor has accuracy issues (the 1% of X meters) combined with the tracking points it creates, or is it something else completely?

I see that you and some other members include an additional 2D LiDAR sensor as well. Can this setup be incorporated in the standalone version too? I assume the LiDAR then takes over the odometry, or am I wrong? If so, is this not a more accurate procedure for odometry?

I'll make a cloud file and drop you the link...
Hi,
The drift is caused mainly by sensor noise. With a perfect sensor (like in simulation), there would be very little drift (only what floating-point precision introduces). Some sensors are noisier than others. For example, with an Xtion, drift will be lower if the tracked features are under 4 meters away (where the sensor is more accurate). A lack of good visual features to track can also generate more drift. To deal with the inherent noise of the sensor, some visual odometry approaches have better algorithms than others for robustly estimating the trajectory. Google Tango has visual inertial odometry far better than the visual odometry of RTAB-Map. If you can get your hands on a Google Tango-enabled phone (Asus ZenFone AR or Phab2Pro), you may try the RTAB-Map app on the Play Store to see the difference.

The standalone doesn't allow the use of a LiDAR, only ROS (rtabmap_ros) does. Indeed, a long-range LiDAR can be used to greatly decrease odometry drift. Note that when using rtabmap_ros, you can use any odometry approach you want (as long as it is ROS compatible) as input to RTAB-Map.

cheers,
Mathieu
Hi Mathieu
Thanks for the explanation. Makes complete sense! Is there a setting where I can limit the tracking point distance so that the software discards anything beyond 4 m? I did find distance settings, but I think those were the cloud creation max & min distance. I also saw that Occipital (Structure Sensor) adds a wide-angle lens over the iPad camera with their Canvas app. Do you think the same setup on the Xtion might help increase the tracking points, or will it mess up the system?

Yes, I had a look at those phones, but importing them to South Africa is a very costly exercise as they are not available over here yet. Might be in the near future, hopefully. Is it maybe possible to utilize a tablet's IMU and incorporate it into RTAB-Map to basically copy the setup of the Google Tango-enabled phones? That would probably require some serious programming though, and only in the ROS software I assume. I'll try to get my brother to help me with RTAB-Map ROS, because I don't understand and know nothing about programming, and it seems that there are more possibilities within the ROS software. I really like the concept of incorporating the LiDAR for odometry as well.
Hi Tertius,
To limit the distance of extracted features, see Preferences->Visual Registration->Maximum feature depth.

A wide-angle lens would require a recalibration of the Xtion, but RTAB-Map won't use the extra RGB pixels that fall outside the field of view of the depth camera, so you won't see any difference.

Tango has a lot more than normal phones: a fisheye camera that is hardware-synchronized with the IMU, plus a depth camera, not to mention a proprietary visual inertial odometry approach. You will save a lot of money/time by buying a Tango phone directly rather than developing it yourself. Currently the nearest thing to Tango is ARKit on the iPhone, though there is no depth camera on the iPhone (yet). You may search the web to see whether people have integrated ARKit with an Occipital Structure Sensor (if you don't want to use Occipital's motion algorithms). ARKit uses a synchronized IMU + camera to estimate motion (again a proprietary visual inertial approach).

cheers,
Mathieu
Thanks Mathieu!
Yes, what you say about the field of view of the depth camera makes sense. I was hoping its field of view was much larger than the RGB camera's. I will have a look at the Tango phone again; like you say, it may be a cheaper solution to just get one and get it over with than to struggle to make something work with my limited capabilities. Just a damn shame they're not available over here in SA yet. I'll also research the ARKit you mentioned...