RGBD+2D lidar, same launch file, different view_frames results


RGBD+2D lidar, same launch file, different view_frames results

You Li
Hi Mathieu, thank you very much for the amazing work! I am trying to integrate hector_slam with RTAB-Map. What I did was revise your "demo_hector_mapping.launch" file, replacing the rgb and depth topics with the ones I have. The devices I used were a Kinect v2 and an RPLidar A2.
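For reference, the kind of remapping involved might look like the fragment below; the topic names (the usual ones published by kinect2_bridge and the RPLidar driver) are assumptions on my part, not taken from the original launch file:

```xml
<launch>
  <!-- Hypothetical remappings for Kinect v2 (via kinect2_bridge) and RPLidar A2 -->
  <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen">
    <remap from="rgb/image"       to="/kinect2/qhd/image_color_rect"/>
    <remap from="depth/image"     to="/kinect2/qhd/image_depth_rect"/>
    <remap from="rgb/camera_info" to="/kinect2/qhd/camera_info"/>
    <remap from="scan"            to="/scan"/>
  </node>
</launch>
```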

I noticed that for the example dataset you provided ("demo_mapping.bag"), the frames are like this:


However, when I used my "demo_hector_mapping.launch" file (everything is the same except the specific topics), and run my own data, the frames are like this :


My question is: why is mine so different from yours? Are there any other parameters I need to set beyond the "demo_hector_mapping.launch" file?

Maybe there is something to note about the data collection? If so, could you tell me how you collected your RGBD data as well as the lidar data? Thanks.

Here is my revised "demo_hector_mapping.launch" file:

demo_hector_mapping_new_01.launch

Here is the information of your "demo_mapping.bag" file:


and here is that of my data:


Thanks,
You Li

Re: RGBD+2D lidar, same launch file, different view_frames results

You Li
I noticed that when running the example dataset "demo_mapping.bag", there are maybe 20 frames whose relations are well organized. Where are these relations defined? They are not provided in the demo_hector_mapping.launch file.
Thanks.
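In the demos, frame relations like these typically come from static_transform_publisher nodes (or from robot_state_publisher with a URDF on a real robot) launched alongside rtabmap. A minimal sketch; the frame names and offsets here are assumptions, not the demo's actual values:

```xml
<!-- Hypothetical static transforms tying the sensors to base_link.
     args: x y z yaw pitch roll parent_frame child_frame period_ms -->
<node pkg="tf" type="static_transform_publisher" name="base_to_laser"
      args="0.1 0 0.2 0 0 0 base_link laser 100"/>
<node pkg="tf" type="static_transform_publisher" name="base_to_camera"
      args="0.1 0 0.3 -1.5708 0 -1.5708 base_link kinect2_link 100"/>
```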

Re: RGBD+2D lidar, same launch file, different view_frames results

You Li
In reply to this post by You Li
Here is the result using the following .launch file:
demo_hector_mapping_v2.launch



I am sure the hector_slam algorithm can run on my laptop. However, its solution was probably not used in this data processing, because if I blocked the lidar data, the mapping result (with only the Kinect v2) stayed the same. Why did this happen? I guess it is because of the frames?

Here are some warnings about my frames:


Here is the view of my frames:
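One thing worth checking when tf warnings appear is whether the static transforms themselves are right. Besides the yaw/pitch/roll form, static_transform_publisher also accepts a quaternion (x y z qx qy qz qw parent child period_ms). A minimal sketch for computing such a quaternion by hand, assuming the standard ZYX Euler convention; the 10-degree camera pitch is a made-up example, not a value from my setup:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert ZYX Euler angles (radians) to a quaternion (x, y, z, w)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    w = cr * cp * cy + sr * sp * sy
    return (x, y, z, w)

# Example: a camera pitched down 10 degrees relative to base_link
print(euler_to_quaternion(0.0, math.radians(10), 0.0))
```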

Re: RGBD+2D lidar, same launch file, different view_frames results

You Li
In reply to this post by You Li
So how should I adjust my configuration to ensure that the information from hector_slam is used in the data processing? Thanks
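One common approach (a sketch, not taken from the original launch files) is to point rtabmap's odom_frame_id at the frame hector_slam publishes, so rtabmap reads odometry from TF instead of computing its own visual odometry. The frame names here are assumptions; "hector_map" would have to match whatever map_frame hector_slam is configured with:

```xml
<!-- Hypothetical: make rtabmap consume hector_slam's pose via TF -->
<node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen">
  <param name="frame_id"      type="string" value="base_link"/>
  <param name="odom_frame_id" type="string" value="hector_map"/>
</node>
```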

Re: RGBD+2D lidar, same launch file, different view_frames results

You Li
In reply to this post by You Li
Hi,

I have modified my launch file as follows:
demo_hector_mapping_v4.launch
demo_robot_mapping_v1.rviz

and now the frames are


Does this make more sense?

However, the mapping result was not good.
Here is a video of data processing:
https://www.dropbox.com/s/exeb4j0xpv2muyn/25.ogv?dl=0

and here is my data:
https://www.dropbox.com/s/zlcoh7r2gvevzz0/slam_025.bag.tar.gz?dl=0

If you would rather not download the video, here is a figure:


