Hi,
I have been scanning quite a lot with the app, and one issue is consistently there: the floor is very noisy, much more so than the walls and the ceiling. This can be seen (hopefully well enough) in the following picture. It's almost as if the floor did not exist in the eyes of the sensor. Although some of the areas I'm scanning are wet, the issue still arises when the floor is dry. I find it very hard to filter the point cloud accordingly, and it makes the meshing process quite challenging. I haven't seen this problem mentioned on the forum or on GitHub before, but I'd like to know whether other people have been dealing with it too, and whether it can be fixed with the right parameters. I still have the issue after detecting more loops and running bundle adjustment.

Thank you for your help,
Pierre
Hi Pierre,
Are you using an iPhone/iPad with LiDAR? I saw in your previous scan that the depth images were always perfectly dense, but had some distortions. In the app's Settings, make sure to use high depth confidence to keep only the most accurate points. With low confidence, if the ground is not very textured, ARKit will try to interpolate depth, causing a wavy look.

Cheers,
Mathieu
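For illustration only, a minimal sketch (not code from the app) of what keeping only high-confidence depth pixels amounts to. ARKit encodes low/medium/high confidence as 0/1/2 in its confidence map; the function name and default threshold here are just assumptions:

#include <opencv2/core.hpp>

// Invalidate depth pixels whose confidence is below a threshold.
// depth32F: CV_32FC1 depth in meters; confidence8U: CV_8UC1, same size.
cv::Mat filterDepthByConfidence(const cv::Mat & depth32F,
                                const cv::Mat & confidence8U,
                                unsigned char minConfidence = 2)
{
    CV_Assert(depth32F.type() == CV_32FC1 && confidence8U.type() == CV_8UC1);
    CV_Assert(depth32F.size() == confidence8U.size());
    cv::Mat filtered = depth32F.clone();
    for(int y = 0; y < filtered.rows; ++y)
    {
        for(int x = 0; x < filtered.cols; ++x)
        {
            if(confidence8U.at<unsigned char>(y, x) < minConfidence)
            {
                filtered.at<float>(y, x) = 0.0f; // 0 = invalid depth
            }
        }
    }
    return filtered;
}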
Hi Mathieu,
Thanks for your answer. Yes, I'm using an iPhone and I have low depth confidence. This was convenient for me because both the left and right walls would be scanned nicely as I would just walk straight. But I guess I'll have to change that then. About distortion and depth confidence, is this something I can adjust now, or does it have to be set before scanning?

Thanks,
Best,
Pierre

EDIT: I would like to add that if I generate the final point cloud from the "laser scans" instead of RGB-D, then the result is really clean and not noisy, although very sparse. Only a few nodes have an associated laser scan. I'm confused: what is this exactly?
Hi Pierre,
Unfortunately, we don't keep the confidence image in the database, so we cannot change it in post-processing. As for the "laser scans" in the database, they are a "debug hack" to save the raw ARKit tracked visual features in the database. The real laser scan is already contained in the depth image (with high-confidence pixels) when we use an iPhone/iPad with LiDAR. If the iPhone/iPad doesn't have a LiDAR, I am not sure what high confidence would mean. I think that when we don't move, ARKit doesn't return any tracked features, so that may be why you see some nodes without points.

Regards,
Mathieu
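To make the point concrete, here is a minimal sketch (not RTAB-Map code) of how the depth image already plays the role of a dense laser scan: each valid pixel is back-projected through a pinhole model into a 3D point. The intrinsics fx, fy, cx, cy are placeholders, not values from the app:

#include <opencv2/core.hpp>
#include <vector>

// Back-project valid depth pixels (depth > 0) into 3D points in the camera frame.
std::vector<cv::Point3f> depthToPoints(const cv::Mat & depth32F, // CV_32FC1, meters
                                       float fx, float fy, float cx, float cy)
{
    std::vector<cv::Point3f> points;
    for(int v = 0; v < depth32F.rows; ++v)
    {
        for(int u = 0; u < depth32F.cols; ++u)
        {
            float d = depth32F.at<float>(v, u);
            if(d > 0.0f) // skip pixels already invalidated (e.g., low confidence)
            {
                points.emplace_back((u - cx) * d / fx, (v - cy) * d / fy, d);
            }
        }
    }
    return points;
}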
Hi Mathieu,
Thanks for the answer. Would it be a sensible feature request to keep the confidence map in the database for ARKit? I see a lot of reasons to do so; maybe I'm missing the reasons not to?

Cheers,
Pierre
In reply to this post by matlabbe
It'd definitely be nice if the confidence data could be saved as well. Having stumbled into this myself, it's somewhat unfortunate that the iOS app's depth confidence setting is bundled together with the "Rendering" settings. The other parameters in that group (density, max depth, min depth) are all configurable when reprocessing a scan, and the description below the settings group about re-opening the map to change the rendered cloud depth can reinforce the impression that the whole group is configurable in post.
In reply to this post by Pierre
Hi Pierre,
There is no API to save more image types than grayscale, RGB, and depth images. Other types discussed in the past are IR (captured at the same time as RGB), disparity, thermal, or object/class/id segmented images. A depth/disparity confidence image could be another type. A workaround was to use the user_data field of the SensorData class: https://answers.ros.org/question/211785/how-to-generate-thermal-colored-map/ . With the current iOS code, it would be possible to populate user_data with a compressed confidence image, instead of thresholding the depth values. However, even if that added code could be only a dozen lines, there would be no option to actually use it in RTAB-Map's post-processing tools (because it is user data, only the user who saved it externally would know what to do with the data in the database). I can see that Android also has its own way to set a confidence (though it is not used), which could also be interesting to support.

One major advantage of keeping the confidence image is that for loop closure detection or visual odometry we could use low confidence to get denser depth, while for geometric 3D reconstruction we would use only high confidence. But to do that, we cannot use user_data; a new feature needs to be implemented. I created an issue about that. However, it is a relatively large change if we do it correctly.

Regards,
Mathieu
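A rough sketch of that user_data workaround (not an implemented feature), assuming rtabmap::SensorData::setUserData() and the rtabmap::compressData2() helper from rtabmap/core/Compression.h behave as in current RTAB-Map; names and headers should be double-checked against the version in use:

#include <rtabmap/core/SensorData.h>
#include <rtabmap/core/Compression.h>
#include <opencv2/core.hpp>

// Attach a compressed confidence image (CV_8UC1) to a node as opaque user data.
// RTAB-Map itself won't interpret it; only an external tool that knows this
// convention could use it to threshold the depth in post-processing.
void attachConfidence(rtabmap::SensorData & data, const cv::Mat & confidence8U)
{
    cv::Mat compressed = rtabmap::compressData2(confidence8U);
    data.setUserData(compressed);
}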