Hello.
I have successfully run Visual SLAM and ICP SLAM with the L515. As a next step, I would like to run SLAM with the RGB and depth images coming from two different cameras. As a test, I configured the depth input to come from the other camera, but SLAM did not start. (The message types and the topic names are the same as before.) Is this possible? Thanks in advance.
Hi,
It is possible. What are the warnings? I suspect you need to set the approx_sync argument to true. Note that even if rtabmap accepts the topics, that does not mean it will work: the TF between the two cameras must be accurate, and the depth image must be transformed into the color camera frame (depth registration) before it is sent to rtabmap. Once the TF between the cameras is known, you can use the rtabmap_ros/pointcloud_to_depthimage node to create the registered depth image. If your depth sensor does not provide a point cloud, you can use rtabmap_ros/point_cloud_xyz to make one.
cheers,
Mathieu
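To make the wiring concrete, here is a minimal launch sketch of that pipeline. It is not a configuration from this thread: the topic names under /depth_cam and /rgb_cam, the output topic /rgb_cam/depth_registered, and the frame name rgb_cam_optical_frame are placeholder assumptions for this example, and the remappings and parameters should be checked against the rtabmap_ros documentation for point_cloud_xyz, pointcloud_to_depthimage, and rgbd_odometry.

-------------------------------------------------------------------------------------------------------------------
<launch>
  <!-- 1) Build a point cloud from the external depth camera
         (only needed if it does not already publish one). -->
  <node pkg="nodelet" type="nodelet" name="point_cloud_xyz"
        args="standalone rtabmap_ros/point_cloud_xyz">
    <remap from="depth/image"       to="/depth_cam/image_raw"/>
    <remap from="depth/camera_info" to="/depth_cam/camera_info"/>
    <remap from="cloud"             to="/depth_cam/cloud"/>
  </node>

  <!-- 2) Re-project the cloud into the color camera to get a registered
         depth image. This requires a valid TF between the two camera
         optical frames. fixed_frame_id is a placeholder assumption here,
         used because the two cameras are not hardware-synchronized. -->
  <node pkg="nodelet" type="nodelet" name="pointcloud_to_depthimage"
        args="standalone rtabmap_ros/pointcloud_to_depthimage">
    <remap from="cloud"       to="/depth_cam/cloud"/>
    <remap from="camera_info" to="/rgb_cam/camera_info"/>
    <remap from="image"       to="/rgb_cam/depth_registered"/>
    <param name="fixed_frame_id" value="rgb_cam_optical_frame"/>
  </node>

  <!-- 3) RGB-D odometry fed with the color image and the registered depth.
         approx_sync is true because the timestamps come from two cameras. -->
  <node pkg="rtabmap_ros" type="rgbd_odometry" name="rgbd_odometry">
    <remap from="rgb/image"       to="/rgb_cam/image_raw"/>
    <remap from="rgb/camera_info" to="/rgb_cam/camera_info"/>
    <remap from="depth/image"     to="/rgb_cam/depth_registered"/>
    <param name="approx_sync" value="true"/>
  </node>
</launch>
-------------------------------------------------------------------------------------------------------------------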
Thank you for your solution.
After trying many things, I succeeded in starting the RGB-D odometry. However, immediately after starting SLAM, the screen turns red and I get the following warnings in my terminal:

-------------------------------------------------------------------------------------------------------------------
[ WARN] (2021-09-16 17:53:30.587) OdometryF2M.cpp:529::computeTransform() Registration failed: "Not enough inliers 0/20 (matches=8) between -1 and 2" (guess=xyz=0.000000,0.000000,0.000000 rpy=0.003163,-0.021590,0.001380)
[ WARN] (2021-09-16 17:53:30.587) OdometryF2M.cpp:290::computeTransform() Failed to find a transformation with the provided guess (xyz=0.000000,0.000000,0.000000 rpy=0.003163,-0.021590,0.001380), trying again without a guess.
[ WARN] (2021-09-16 17:53:30.588) OdometryF2M.cpp:519::computeTransform() Trial with no guess still fail.
[ WARN] (2021-09-16 17:53:30.588) OdometryF2M.cpp:529::computeTransform() Registration failed: "Not enough inliers 0/20 (matches=4) between -1 and 2" (guess=xyz=0.000000,0.000000,0.000000 rpy=0.003163,-0.021590,0.001380)
-------------------------------------------------------------------------------------------------------------------

The rtabmap/odom topic seems to have a value in "orientation" only at the beginning; after that it stays at zero. Any advice would be greatly appreciated. Thank you in advance.
If the depth image is not registered to the color camera, odometry will fail and return null poses (0,0,0,0 quaternion) to indicate that it cannot compute the pose.
What do you mean by "the depth image is registered to the color camera"?
The RGB image (1920✕1080) is calibrated to match the depth image (100✕70); is that not what you mean? Sorry, I'm new to this.
Hi,
By depth registration, I mean getting a one-to-one correspondence between color pixels and depth pixels. Examples on this page. Other resources: http://wiki.ros.org/openni_launch/Tutorials/ExtrinsicCalibrationExternal
It can be done with a stereo calibration between the color camera and the IR stream of the depth camera; the above link does that. Once you have the 3x4 transformation matrix from the stereo calibration between the optical lens of the RGB camera and the optical lens of the depth camera, you can use rtabmap_ros/pointcloud_to_depthimage as explained in the post above.
cheers,
Mathieu
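One way to make that calibration result available to rtabmap_ros is to publish the extrinsic as a static transform between the two optical frames, so pointcloud_to_depthimage can look it up. A minimal sketch; the frame names and the x y z yaw pitch roll values below are placeholders to be replaced with the translation and rotation obtained from your stereo calibration:

-------------------------------------------------------------------------------------------------------------------
<launch>
  <!-- Placeholder values: replace x y z (meters) and yaw pitch roll (radians)
       with the extrinsics from the stereo calibration between the RGB
       optical frame and the depth/IR optical frame. -->
  <node pkg="tf2_ros" type="static_transform_publisher" name="rgb_to_depth_extrinsic"
        args="0.025 0 0  0 0 0  rgb_cam_optical_frame depth_cam_optical_frame"/>
</launch>
-------------------------------------------------------------------------------------------------------------------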
Thank you for your answer.
I have been investigating this for a while. I can now publish a calibrated RGB image (100✕70) from my own calibration node, which registers the RGB images (1920✕1080) to the depth images (100✕70). There is no significant discrepancy between the timestamps of the messages (at most about 50 ms). I suspect the very low resolution might be a factor, but I am not sure. Thanks again for your help.