Need some help with Mapping and Azure Kinect


Need some help with Mapping and Azure Kinect

Janus
This post was updated on .
Hello,

I am a newbie, a first-time user of RTAB-Map and ROS.
But after days of troubleshooting and reading forum messages, I think it's time to just ask.

First of all, I think it is good to explain my goal.
I want to make point clouds of rooms/houses/buildings that are as good as possible (as good as it gets with an Azure Kinect), and maybe some scans of cordless power tools.

What I want from that is:
1. 2D plans (with angles between walls) and heights.
2. Measuring some random things at home, like windows (frames).
3. And if that works all right, using software (with recognition) to make a 3D file/project from the
    point clouds (for VR and/or for designing and building cabinets).

I know that ROS and RTAB-Map are designed more for robot navigation, but if all goes well, I want to build a robot for scanning/mapping. The number one priority for now is the quality of the point cloud.

So a couple of questions and a problem:

1. The most important one: can RTAB-Map deliver good enough point clouds for my goals (in
    combination with the Azure Kinect)?
2. What device-specific settings can I alter for the best point quality?
    Maybe, for the Kinect, the field of view and the min/max distance because of distortion, or is all of
    that handled (better) by ROS/RTAB-Map?
3. If the answer to question 1 is yes: what is the best way to do this? Set decimation of the points as
    low as possible while scanning, or do that afterwards? I also could not find a clear answer on
    whether RTAB-Map saves all the information needed to rebuild the best possible point cloud, or
    whether a lot of information is gone after decimation and filtering.

    I believe I saw all the photos from a scan, but the depth images were in a different format than
    those extracted from Azure Recorder's MKV. So does it contain all the information, and if ROS or
    RTAB-Map isn't built to produce the best possible point cloud, can other software handle this
    depth-image format?
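To make sure I understand what is lost in question 3: here is a minimal sketch (NumPy, made-up points, written by me, not RTAB-Map's actual code) of what I think a voxel filter does at export time. Everything inside an occupied voxel collapses to one representative point, so the discarded points can only come back from the raw data, not from the filtered cloud:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Collapse all points falling in the same voxel to their centroid.

    The output keeps one point per occupied voxel, so the other points
    cannot be recovered from the filtered cloud afterwards.
    """
    # Integer voxel index for every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel and average them
    _, inverse, counts = np.unique(idx, axis=0,
                                   return_inverse=True, return_counts=True)
    inverse = inverse.ravel()  # be robust to NumPy version differences
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

# 4 points, two of them inside the same 0.05 m voxel
cloud = np.array([[0.01, 0.01, 0.01],
                  [0.02, 0.02, 0.02],   # same voxel as the first point
                  [0.30, 0.30, 0.30],
                  [0.90, 0.10, 0.40]])
down = voxel_downsample(cloud, voxel_size=0.05)
print(down.shape[0])  # 3 occupied voxels remain from 4 input points
```

So my question is really whether the database keeps the raw images so this step can be redone later with a smaller voxel size.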


Until now I can only produce a point cloud of what I believe is always the last viewpoint.
First I thought it had to do with not loading all the point clouds, but now I think it has to do with not getting any loop closures.
I filmed by hand, so I think it is caused by moving too fast, and I also sometimes pointed the camera upwards to film the ceiling.
But even then, I think there are segments long enough that they should contribute more data for a larger point cloud than just the end scene (last picture).
I don't mind if my PC calculates for days to get it. I'm just curious whether that is possible, or whether I have to film more slowly and carefully.

In another forum question I saw that RTAB-Map saves/uses only 1 frame per second for its map, and also that 30 fps is way too fast for it.
Is that true with all graphics cards? (Or maybe that was only an answer for that particular question.)
I ordered a Jetson AGX to replace the LattePanda Alpha that I used for the first scans, and the Jetson should handle a lot more than the Intel graphics in the LattePanda.

Then my last questions:
How can I check whether not getting enough loop closures (or not enough inliers) has to do with wrong calibration? I've read that the Kinect is calibrated at the factory, and I have seen the calibration data in the MKV, but I want to be sure of that before trying other things. I also don't know whether the Kinect's video streams are output synchronized, or whether the software is supposed to handle the correction, and if it's the latter, whether the driver handles that, or ROS/RTAB-MAP.
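For the synchronization part, what I would like to check is something like this (a generic pure-Python sketch with made-up timestamps, written by me, nothing Azure-Kinect-specific): pair each RGB timestamp with the nearest depth timestamp and look at the worst gap.

```python
from bisect import bisect_left

def max_pairing_offset(rgb_stamps, depth_stamps):
    """For each RGB timestamp, find the closest depth timestamp and
    return the worst-case gap in seconds. A large gap would mean the
    streams are not well synchronized (or frames were dropped)."""
    depth_stamps = sorted(depth_stamps)
    worst = 0.0
    for t in rgb_stamps:
        i = bisect_left(depth_stamps, t)
        # Closest match is either the neighbor before or after t
        candidates = depth_stamps[max(i - 1, 0):i + 1]
        worst = max(worst, min(abs(t - d) for d in candidates))
    return worst

# Made-up timestamps: 30 fps RGB, depth lagging by 2 ms
rgb = [k / 30.0 for k in range(10)]
depth = [t + 0.002 for t in rgb]
print(max_pairing_offset(rgb, depth))  # worst gap ~2 ms, well within one 33 ms frame
```

If something like this on the real recorded timestamps showed gaps of a whole frame or more, I would suspect the sync rather than the calibration.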

I also saw something in the Odometry (advanced) settings about estimation with a second run, which should be better and more accurate if I just play the data over and over again, right? Can I use that to maybe get more loop closures, or at least to know that there aren't any more? I don't even know if I have any; first I thought that the green flash (not yellow) and/or the highlighted picture meant a loop closure, but now I am not so sure anymore. Otherwise the cloud export (with regenerate) would use that for the point cloud instead of only the last frame.

Is there a setting in RTAB-Map that tries all possible combinations of settings/techniques, or at least some of them (with input from the database or the MKV), if needed at the lowest speed, and finds the best settings for you (best point cloud / more loop closures)?
I did try playing back the database at 0.1 Hz (recording to a new one) and several other settings, but not with the results I hoped for. But I am a newbie, so some or most settings could be wrong.
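What I have in mind is a brute-force sweep like the sketch below (the parameter names are RTAB-Map-style, but the value grid and the scoring function are made up by me; a real `evaluate` would have to reprocess the recorded database with those parameters and count loop closures or inliers):

```python
from itertools import product

# Hypothetical parameter grid -- values are made up for illustration
grid = {
    "Rtabmap/DetectionRate": [1, 2],
    "Vis/MinInliers": [10, 20],
}

def evaluate(params):
    """Placeholder score. In a real sweep this would reprocess the
    database with `params` and return e.g. the loop-closure count."""
    return params["Rtabmap/DetectionRate"] * 10 - params["Vis/MinInliers"] // 5

# Try every combination and keep the best-scoring one
best = max(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=evaluate,
)
print(best)
```

If no such built-in mode exists, is scripting something like this around the database a reasonable approach, or a waste of days of computing?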




Re: Need some help with Mapping and Azure Kinect

matlabbe
Administrator