Depth precision in mapping


kazu
Hi,

I'm Kazu; I previously asked a question about how to work with a laser scanner and a Kinect (without odometry).
This time I have a question regarding depth precision in mapping. In short, the precision is poor and I want to know whether this is usual. Here are the details: say there is a dead end at the end of a hallway; as I approach it, I always get multiple layers instead of a single wall. The wall appears closer the nearer I get to it. For a robotic application I don't think this is critical, but do you see the same issue?
My suspicion is that this precision error is due to either the Kinect's depth precision or my TF settings.

Thanks,
Kazu

Re: Depth precision in mapping

matlabbe
Administrator
Hi Kazu,

Can you post a screenshot of the problem? I'm not sure I understand the "multiple layers" you are seeing. Do you filter the Kinect's point cloud to a maximum depth? The Kinect's point cloud beyond 4 meters has really poor precision, so if you are moving toward a wall and point clouds taken from 10, 8, 6, 4, and 2 meters are superimposed, the wall will be rendered very poorly.
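The max-depth filtering mentioned above can be sketched as a simple range cut on the cloud. This is a minimal NumPy sketch, not RTAB-Map's actual implementation; the point values and the 4 m cutoff are illustrative assumptions.

```python
import numpy as np

# Hypothetical point cloud: N x 3 array of (x, y, z) points in the
# camera frame, with z pointing forward (depth in meters).
cloud = np.array([
    [0.1, 0.0, 1.5],
    [0.2, 0.1, 3.8],
    [0.0, 0.3, 5.2],   # beyond the cutoff, will be dropped
    [0.4, 0.2, 9.7],   # beyond the cutoff, will be dropped
])

MAX_DEPTH = 4.0  # meters; Kinect precision degrades quickly past this

# Keep only points whose depth is within the cutoff.
filtered = cloud[cloud[:, 2] <= MAX_DEPTH]
print(filtered.shape[0])  # 2 points survive
```

In a ROS pipeline the same cut is usually done on the depth image or with a pass-through filter before the cloud is assembled into the map.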

Also, if you map using the 2D laser scanner, the rendered point cloud from the Kinect may not be correctly aligned with the scans (either the Kinect is not calibrated, or the messages are badly synchronized).

cheers,
Mathieu

Re: Depth precision in mapping

kazu
Hi Mathieu,

Yes, sure; here is a screenshot of the wall (top view). As can be seen, I get several layers instead of a single plane.

[screenshot: top view of the wall, showing multiple layers]

I think the problem is the Kinect itself, since I already set a maximum depth of 6.0 meters. The poor depth precision of the Kinect is documented in the literature, so I guess it can't be helped.

Thanks,
Kazu

Re: Depth precision in mapping

matlabbe
Administrator
Hi Kazu,

Yes, this is a limitation of the Kinect. The Kinect v2 has much better depth precision (up to 10-12 meters, from what I've seen), but it is less portable than an Xtion Pro Live. It may be worth a try depending on your application. Otherwise, you can still filter the Kinect's depth to under 4 meters to get better precision in the map.
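To see why a 4 m cutoff helps, the Kinect v1's random depth error is commonly modeled as growing roughly with the square of the range. The coefficient below is an illustrative assumption (chosen to give a few millimeters of noise at 1 m and several centimeters near 5 m), not a calibrated value for any particular device.

```python
# Sketch of the commonly cited quadratic growth of Kinect v1 depth
# noise with range: sigma ~ k * z^2.
k = 0.0028  # 1/m, illustrative coefficient only

for z in [1.0, 2.0, 4.0, 6.0]:
    sigma = k * z * z
    print(f"range {z:.0f} m -> ~{sigma * 100:.1f} cm depth noise")
```

With noise growing quadratically, points past 4 m contribute far more error to the map than they add in coverage, which is why cutting them off flattens the wall.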

Another thing that can create multiple layers like you are seeing is poor odometry. If the robot moves toward a dead end and there are odometry errors in translation, the depth clouds may not all be aligned. Example: the robot is 6 meters from the dead-end wall and moves toward it. At first it takes an image of the wall, which is at 6 meters. It then moves 3 meters toward the wall but accumulates a large odometry error of 1 meter, so it thinks it moved only 2 meters. The wall, however, is now seen 3 meters in front of the robot instead of the 4 meters expected from odometry. So there will be a 1 meter error on the wall cloud (it appears at both 5 and 6 meters from the original position).
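The arithmetic in that example can be written out explicitly. This is just the worked numbers from the scenario above, with variable names chosen for illustration:

```python
# First observation: robot at the origin, wall seen 6 m ahead.
first_obs = 0.0 + 6.0           # wall placed at 6 m in the map frame

moved_true = 3.0                # robot actually moves 3 m
odom_error = 1.0                # odometry under-reports by 1 m
robot_odom = moved_true - odom_error  # odometry believes robot is at 2 m

wall_seen = 3.0                 # wall really appears 3 m in front
second_obs = robot_odom + wall_seen   # wall placed at 5 m in the map frame

print(first_obs, second_obs)    # 6.0 5.0 -> two layers, 1 m apart
```

The two observations of the same physical wall land 1 meter apart in the map, which renders as exactly the kind of layered wall shown in the screenshot.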

cheers,
Mathieu