How can I get the xyz coordinate of a pixel using the Kinect depth output?


maogic
Hey Mathieu and all:

I'm trying to create a 2D map that includes the coordinates of objects of interest. My plan is to map the environment with RTAB-Map while running a 2D detector (YOLO) at the same time. Once an object of interest is found, I can get its pixel location in the RGB image and use the same pixel index to read the depth from /camera/depth_registered/image. The next step would be using this depth measurement (and the Kinect parameters? I'm not sure) to get the object's horizontal and vertical distance from the Kinect, which I don't know how to do. From some searching, people say I need the Kinect calibration parameters from OpenNI, but I couldn't find them. Another approach might be to find the focal length in pixels (or the pixel size in mm), but I don't know how to do that either.
Is it possible to get object location using kinect depth? If so, can anyone show me how to do it?

BTW, I know I can use point cloud to get xyz of interested pixel, but it's slower compared to depth output.

Thanks!
Mao

Re: How can I get the xyz coordinate of a pixel using the Kinect depth output?

matlabbe
Administrator
Hi Mao,

You need the CameraInfo message published by openni to get the calibration parameters (e.g., focal length and principal point). Create an image_geometry::PinholeCameraModel from the CameraInfo, then you can use projectPixelTo3dRay:

cv::Point3d image_geometry::PinholeCameraModel::projectPixelTo3dRay(const cv::Point2d& uv_rect) const
To get the actual 3D point in the camera frame, multiply the returned 3D ray by the depth value at that pixel.
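For reference, the same computation can be done by hand from the CameraInfo intrinsics (fx, fy, cx, cy): back-projecting pixel (u, v) along the ray and scaling by depth. This is a minimal sketch, not the image_geometry API itself; the intrinsic values below are made-up, Kinect-like numbers, and it assumes the registered depth image gives the z-distance in meters:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with z-distance `depth` (meters) to a 3D
    point in the camera frame, using the pinhole model. Equivalent to
    projectPixelTo3dRay (which returns a ray with z = 1) times depth."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example with hypothetical Kinect-like intrinsics:
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5
point = pixel_to_3d(320, 240, 2.0, fx, fy, cx, cy)
print(point)
```

Note that if the depth image is 16UC1 the values are in millimeters and must be divided by 1000 first; 32FC1 images are already in meters.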

cheers,
Mathieu

Re: How can I get the xyz coordinate of a pixel using the Kinect depth output?

maogic
Thanks Mathieu! I got it working using your method.