Different cameras for depth image and RGB image?


jseng
Hi, I was wondering if it is feasible to use different cameras for the RGB image and the depth image.  I am thinking of using the depth image (720p) from a ZED and the RGB image from a wide-angle lens camera.  I have encoders on the robot I am using.

From what I understand, the RGB image is used for loop closure and the depth image for proximity detection?  I only need a 2D occupancy grid and would do obstacle detection using the depth image.

matlabbe
Administrator
Hi,

I would still use the RGB frame from the ZED camera. It is true that only the RGB image is required for loop closure detection, but after a loop closure is detected, a transformation must be computed, so a depth image (or right image, in the stereo case) registered with the RGB image is required to get the 3D positions of the visual features. Proximity detection is also done with RGB+depth images, and lidar if available.

cheers,
Mathieu
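Mathieu's point above is that depth is what turns matched 2D features into a 6-DoF transform. RTAB-Map's actual registration pipeline (feature matching plus RANSAC/PnP) is more involved, but the final step can be illustrated with a minimal sketch: given already-matched 3D feature positions (back-projected using their depths), recover the rigid transform between the two frames with the Kabsch algorithm. All names here are illustrative, not RTAB-Map API.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t.

    src, dst: (N, 3) arrays of matched 3D feature positions
    (obtained by back-projecting 2D features with their depth values).
    """
    centroid_src = src.mean(axis=0)
    centroid_dst = dst.mean(axis=0)
    # Cross-covariance of the centred point sets (Kabsch algorithm)
    H = (src - centroid_src).T @ (dst - centroid_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = centroid_dst - R @ centroid_src
    return R, t

# Toy example: feature positions re-observed after the robot moved.
rng = np.random.default_rng(0)
pts = rng.uniform(-2, 2, size=(20, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 0.1])
pts_moved = pts @ R_true.T + t_true

R, t = rigid_transform_3d(pts, pts_moved)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

Without depth there are no 3D positions to feed this kind of solver, which is why RGB alone is enough to *detect* a loop closure but not to *compute* the loop-closure transform.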

jseng
Thanks for the reply!  I had another question regarding loop closure: when RTAB-Map is mapping and I watch the incoming image (with the green, yellow, and red dots drawn), those dots are only drawn at locations close to the camera.  Is this because I have turned down the range on the ZED?  It seems like none of the far-range pixels are used in loop closure.

matlabbe
Administrator
Hi,

By default, features without valid depth are not extracted (Mem/DepthAsMask=true). If you show the depth image (right-click menu), you will see that no features are extracted where there is no depth. You can set Mem/DepthAsMask to false to also use far features.

cheers,
Mathieu
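Conceptually, Mem/DepthAsMask hands the valid-depth region to the feature detector as an extraction mask. The sketch below is not RTAB-Map code (the response measure and function names are made up for illustration), but it shows why dots only appear where depth is valid:

```python
import numpy as np

def detect_features(gray, depth=None, max_features=50):
    """Pick the strongest gradient-response pixels, optionally masked by depth.

    With depth given (Mem/DepthAsMask=true behaviour), pixels whose depth is
    0/NaN are excluded; with depth=None, far features are kept too.
    """
    gy, gx = np.gradient(gray.astype(float))
    response = gx**2 + gy**2                       # crude corner-ish response
    if depth is not None:
        valid = np.isfinite(depth) & (depth > 0)   # the depth mask
        response = np.where(valid, response, 0.0)
    idx = np.argsort(response, axis=None)[::-1][:max_features]
    rows, cols = np.unravel_index(idx, gray.shape)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy frame: texture everywhere, but valid depth only on the left half
# (as when the ZED's range is turned down and far pixels have no depth).
rng = np.random.default_rng(1)
gray = rng.uniform(0, 255, size=(64, 64))
depth = np.zeros((64, 64))
depth[:, :32] = 1.5                                # metres, say

masked = detect_features(gray, depth)              # features only on the left
unmasked = detect_features(gray, None)             # features anywhere
print(all(c < 32 for _, c in masked))              # True
```

Setting Mem/DepthAsMask to false corresponds to the `depth=None` call: far features then participate in loop closure detection, even though they contribute no 3D position.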

jseng
Thanks for the reply.  I will look into that setting.

Another question: I have been building the database offline using ROS .bag files I recorded.  Once I build the multi-session database, I remove all the depth images to make the database smaller.  Loop closure still seems to work without the depth images in the database.  Do you see any problems with running this way?  Could I remove the RGB images as well?  Thanks so much for your help!

matlabbe
Administrator
Hi,

You can remove RGB and depth images from the database after it has been created. A new mapping session or new localization session should still be able to find loop closures and localize on that previous map. Only features are required for localization and loop closure detection.

For convenience, you can disable the saving of raw RGB and depth images in the database directly by setting the parameter Mem/BinDataKept to false. However, the disadvantages of not recording the raw images are that debugging afterwards (with rtabmap-databaseViewer) is more difficult and that the database cannot be recovered (with the rtabmap-recovery tool) if the system crashes. So removing the RGB and depth images after rtabmap has closed safely is the safer approach.

cheers,
Mathieu
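An RTAB-Map database is an SQLite file, so stripping the raw images after a clean shutdown amounts to nulling the image blobs and vacuuming the file. The table and column names below ("Data", "image", "depth") are an assumption about the schema; check your own database with `sqlite3 map.db .schema` before running anything like this. The demo therefore builds a toy database of the same shape rather than touching a real map:

```python
import os
import sqlite3
import tempfile

def strip_images(db_path):
    """Null the raw image blobs and reclaim the space with VACUUM.

    Assumes a table 'Data' with 'image' and 'depth' blob columns
    (verify against your database's schema first), and should only
    be run after rtabmap has closed the database cleanly.
    """
    con = sqlite3.connect(db_path)
    con.execute("UPDATE Data SET image = NULL, depth = NULL")
    con.commit()
    con.execute("VACUUM")   # actually shrink the file on disk
    con.close()

# Demo on a toy database with the same shape (not a real RTAB-Map file).
path = os.path.join(tempfile.mkdtemp(), "map.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE Data (id INTEGER PRIMARY KEY, image BLOB, depth BLOB)")
con.executemany("INSERT INTO Data (image, depth) VALUES (?, ?)",
                [(b"\x00" * 100_000, b"\x00" * 100_000) for _ in range(10)])
con.commit()
con.close()

before = os.path.getsize(path)
strip_images(path)
after = os.path.getsize(path)
print(after < before)   # True
```

Keep a backup of the original database before stripping it; as Mathieu notes, the raw images are the only way to reprocess or debug the map later.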

jseng
Thank you for your replies.  I have another question regarding a situation in my mapping.  I am using my ZED camera along an outdoor building: on the left side are bushes and on the right side is a brick wall.  When I look at the features that are extracted, they all come from the bushes (and none from the brick wall).  I think this is reducing the number of loop closures I get along this section.

I have attached a picture with all the features highlighted.  I think the features are selected by response magnitude?  I am trying to think of a way to distribute them across the image.  Thank you for your help!




matlabbe
Administrator
Hi,

Are you using the default GFTT/BRIEF detector (Kp/DetectorStrategy=6)? If so, the GFTT parameters can be tuned to extract features more uniformly in high-resolution, high-contrast images:
- GFTT/BlockSize (default 3): can be increased to extract larger features
- GFTT/MinDistance (default 3): can be increased to 7-10 to extract features everywhere
- GFTT/QualityLevel (default 0.001): can be decreased to extract more features in darker areas (with lower contrast)

cheers,
Mathieu
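The reason GFTT/MinDistance spreads features out is that it acts as distance-based non-maximum suppression: candidates are sorted by response, and each one is kept only if no stronger feature was already kept within minDistance pixels. OpenCV's goodFeaturesToTrack is the real implementation; the numpy-only sketch below just illustrates the selection rule on the bushes-vs-wall situation (all coordinates and responses are made up):

```python
import numpy as np

def select_by_min_distance(points, responses, min_distance):
    """Greedy GFTT-style selection: strongest first, then drop any
    candidate closer than min_distance to an already-kept feature."""
    order = np.argsort(responses)[::-1]
    kept = []
    for i in order:
        p = points[i]
        if all(np.hypot(p[0] - q[0], p[1] - q[1]) >= min_distance
               for q in kept):
            kept.append(p)
    return kept

# Candidates clustered in one corner (the "bushes") plus a few weaker
# ones elsewhere in the image (the "brick wall").
rng = np.random.default_rng(2)
bushes = [(float(x), float(y)) for x in range(0, 20, 2)
                               for y in range(0, 20, 2)]
wall = [(200.0, 50.0), (240.0, 80.0), (300.0, 120.0)]
points = bushes + wall
responses = np.concatenate([rng.uniform(0.5, 1.0, len(bushes)),  # strong
                            rng.uniform(0.1, 0.2, len(wall))])   # weak

tight = select_by_min_distance(points, responses, min_distance=3)
spread = select_by_min_distance(points, responses, min_distance=10)
print(len(spread) < len(tight))   # True: larger spacing thins the cluster
```

With a larger minimum distance, the dense high-response cluster can no longer monopolize the feature budget, leaving room for the weaker but well-separated wall features, which is the behaviour Mathieu suggests tuning towards.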

jseng
I am using GFTT+ORB, but those GFTT settings are making a big difference in where the features are being extracted.  They are now spread out over the image.  Thanks so much for the help!