Hi Mathieu
Thought I'd start a new thread after some scanning time. With some practice and different scanning methods, I've improved my handheld scanning quite a bit. I love the multiple mapping procedure that you developed! It works well by registering the various clouds, and it also saves time by not having to do this manually!

My finding so far is that there are various areas where the clouds bend when stitched together by the multiple mapping procedure. I believe I need to understand the calibration methods better in order to calibrate the sensor better. My biggest challenge at this point is the noise in the cloud on walls, as walls are the primary element in my scans that I require good accuracy on. My clouds sometimes have noise of up to 70mm in some areas, and this creates havoc when I do a large area. This type of error can accumulate into large errors in the overall as-built floorplan. It appears that meshing the scan solves it somewhat, but to what degree of accuracy? What I'm after is to generate the point cloud of the walls with a lot less noise prior to meshing. Is there a parameter one can set to better align frames to each other, maybe? I think the filtering parameters may play a role in this, but I'm struggling to understand their different settings.

With all this said, and regardless of my struggles, do you think this technology of RGB-D sensors combined with 3D scanning software may just disrupt the conventional terrestrial LiDAR scanners of today? The reason I'm asking is that I'm looking into purchasing a LiDAR scanner soon, but heck, they cost 100 times more than the RGB-D technology. I had a look at your video of the cottage scan with a phone. I think you did it with a Phab2Pro, if I'm not mistaken? I was in awe of it! What is your take on the Intel ZR300 camera with RTAB-Map? Do you think it will improve on my current setup?
Hi Tertius,
Did you try setting the maximum depth of the sensor when rendering the clouds (Preferences->3D Rendering->Max Depth)? With standard RGB-D cameras, setting it to 4 meters would limit the cloud noise to about 1 to 2 cm. Calibration/registration errors would make the map bend (drift). Longer-range sensors would give better results, or using an IMU for rotation estimation can help too (not supported in RTAB-Map standalone, only in ROS).

You may get good tracking results with the ZR300. However, it is not as easy to use as a Tango-enabled phone. For example, the last time I tried the visual inertial odometry approach included in the RealSense SDK, the results drifted a lot more than Tango, and even more than using RTAB-Map's visual odometry alone (tested only indoors though). It could be interesting to see the ROVIO (Robust Visual Inertial Odometry) approach with the ZR300, like in this video: https://www.youtube.com/watch?v=CGnIx7isVJg (their code/calibration procedure can be found here). Note also that 3D reconstruction may not be as good as with a real RGB-D camera (the RealSense is a stereo camera in IR) farther than 2-3 meters. However, a big advantage of the ZR300 is that it can work outdoors.

cheers,
Mathieu
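For readers who want to reproduce that Max Depth clamp outside the GUI, a pass-through filter on the depth axis has the same effect on a saved cloud. This is only a minimal PCL sketch, not RTAB-Map's internal code; the 4.0 m limit matches the value suggested above:

```cpp
// Sketch: clamp a camera-frame point cloud to a maximum sensor depth,
// mirroring the effect of Preferences->3D Rendering->Max Depth.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/passthrough.h>

pcl::PointCloud<pcl::PointXYZRGB>::Ptr
clampDepth(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud, float maxDepth)
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::PassThrough<pcl::PointXYZRGB> pass;
  pass.setInputCloud(cloud);
  pass.setFilterFieldName("z");          // z = depth axis in the camera frame
  pass.setFilterLimits(0.0f, maxDepth);  // e.g. 4.0f: drop noisy far-range points
  pass.filter(*filtered);
  return filtered;
}
```

With structured-light cameras the depth error grows roughly quadratically with range, which is why cutting the cloud at 4 m removes most of the thick wall noise.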
Hi Mathieu
Yes, I've adjusted the depth you mentioned, but I still found some thick noise areas. Do you reckon the clean wall itself could be causing this? I found that objects in the same frame as the wall come out beautifully shaped and detailed, but the wall surface itself causes havoc. I'm also wondering whether artificial light plays a role in this. What I mean is the different shading on the wall caused by ceiling light reflections or something? Dunno, I might be wrong though!

It's worrying that you mentioned you experienced increased drift with the ZR300, and this renders the purchase of the camera a no-go for me. It's a shame, as I was hoping it could've been a one-stop item for a user like me. (I could've just gotten help from my brother to get the ROS package written, and off I go.) But alas, I came across a company here in South Africa that could maybe assist me with implementing one of their IMU systems, after speaking to them about the Velodyne LiDAR units they utilize. They develop high-spec vehicle automation electronics, so maybe they can assist. Well, if it won't cost me an arm and a leg to develop in the end!

Damn, I like that ROVIO video you linked! That does look promising! The mere fact that it can track with that erratic movement! I think, though I might be mistaken, the new version of Canvas from the Occipital Structure Sensor has more or less the same erratic-movement tracking capability. It could've been some other system too, I can't remember; I've seen so many videos and systems lately.

You don't possibly have a dataset to share of a smallish indoor area, scanned with one of the Tango-enabled phones, that can be imported into the standalone RTAB-Map for me to have a look at the quality of elements? Like I mentioned previously, that video of you scanning the cottage with a phone intrigues me a lot! And maybe, like you mentioned in my previous thread, this might just be my underlying answer to it all. My concern though is: how far can you push the size of a scan on these phones? I survey large-scale commercial and retail buildings, and I can only think that the scan file must be massive! One could break it up into smaller scans, I assume, and register them manually afterwards.

I got hold of a RealSense R200 sensor to test. I'll be picking it up later this week. I will compare it to my Xtion and give my findings later.
Hi,
Beware of reflections, they can add some bad points that are hard to filter automatically. I don't have experience of Kinect-like cameras having difficulties with artificial light (but with sunlight, yes of course). The Kinect v2 (which uses Time-of-Flight) sometimes gives strange points close to the camera when looking into empty space with reflective objects far from the camera (it is as if the beam is reflected and detected in the next frame, which makes the Kinect think it hit something very close to the camera). Besides that, the RGB camera exposure should not influence the geometry, only the texture color.

For the ZR300, what I referred to is that if we use the VIO approach of the RealSense SDK, we get worse results than the VIO approach used in Tango. So from a user perspective, Tango is more appealing for getting the most robust odometry. Note that we can still use the ZR300 in RTAB-Map as usual, which would give results similar to other cameras. You mentioned the R200 at the end; the ZR300 would be better than the R200 as its maximum range is higher.

You can actually download the cottage databases here: https://github.com/introlab/rtabmap/wiki/Multi-Session-Mapping-with-RTAB-Map-Tango (see MultiSessionTango.zip). The databases can be opened in RTAB-Map.

For a large commercial space, the problem with a camera-based mapping approach is that, because of the limited field of view, it will take a lot of time just to scan the area. You will also have to be more careful to scan in a way that finds loop closures. Velodyne-based mapping (though a lot more expensive) is generally much faster for large areas, as those sensors have high range (>100 meters) with great accuracy and a 360° field of view.

cheers,
Mathieu
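As an aside for anyone fighting those stray reflection points: a generic statistical outlier filter is one common way to knock them out in post-processing. A minimal PCL sketch follows; the thresholds are assumptions to tune per sensor, not RTAB-Map defaults:

```cpp
// Sketch: remove sparse stray points (e.g. from reflections) by comparing
// each point's mean distance to its neighbours against the cloud-wide mean.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>

pcl::PointCloud<pcl::PointXYZRGB>::Ptr
removeStrayPoints(const pcl::PointCloud<pcl::PointXYZRGB>::Ptr& cloud)
{
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::StatisticalOutlierRemoval<pcl::PointXYZRGB> sor;
  sor.setInputCloud(cloud);
  sor.setMeanK(50);             // neighbours used to estimate local density
  sor.setStddevMulThresh(1.0);  // drop points beyond 1 sigma of the mean distance
  sor.filter(*filtered);
  return filtered;
}
```

This only helps with isolated stray points; dense false surfaces, like the ToF wrap-around artifact described above, are better avoided at capture time.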
Damn, I just deleted my whole reply by accidentally closing my browser!
Anyway, to sum up my reply: I avoid reflective surfaces completely, as I know that lasers don't like them. I think the issue comes in when these walls are scanned a few times by passing them in frames at different distances, so I'm doing my best to avoid scanning the same areas at different distances, and it seems to work. I've also set the cloud distance to 3.5m to assist with this.

I got hold of a LORD MicroStrain IMU to borrow, so I'm going to try to get my brother (he's a developer) to help me with the RTAB-Map ROS package build. Is there maybe a tutorial on how to build the software to read the IMU for odometry, and also to use loop closure to assist with drift?

Is the ZR300 a better sensor than my Xtion? If so, I would consider purchasing it. I'm okay with the distance factor; I'll scan closer to areas if it can increase my accuracy!
Hi Tertius,
There is no official way to integrate an IMU in rtabmap; it can be done indirectly with the robot_localization package for loosely-coupled sensor fusion. See sensor_fusion.launch for an example. It requires quite some knowledge of pose estimation algorithms to understand correctly. Another approach would be to use a visual inertial odometry approach like ROVIO, then connect its odometry output as the input odometry of the rtabmap node (see the sketch below for that wiring).

I don't have a comparison between these sensors. My first thought is that you would get better motion estimation with the ZR300, but better 3D reconstruction with the Xtion for indoor environments.

cheers,
Mathieu
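In practice that wiring is usually done with a <remap> tag in the launch file rather than code, but to make the data flow explicit, here is a minimal ROS (roscpp) sketch that forwards a VIO odometry stream to the topic rtabmap subscribes to. The topic names ("rovio/odometry", "odom") are assumptions for illustration:

```cpp
// Sketch: bridge an external VIO odometry topic (e.g. ROVIO's output) to
// the odometry topic consumed by the rtabmap node. A launch-file <remap>
// achieves the same thing with no code; topic names here are assumptions.
#include <ros/ros.h>
#include <nav_msgs/Odometry.h>

ros::Publisher odom_pub;

void odomCallback(const nav_msgs::Odometry::ConstPtr& msg)
{
  odom_pub.publish(*msg);  // pose and twist are forwarded unchanged
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "vio_odom_bridge");
  ros::NodeHandle nh;
  odom_pub = nh.advertise<nav_msgs::Odometry>("odom", 10);
  ros::Subscriber sub = nh.subscribe("rovio/odometry", 10, odomCallback);
  ros::spin();
  return 0;
}
```

rtabmap then runs with visual odometry disabled and uses this external odometry for mapping and loop closure correction.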