INTRODUCTION
First of all, I want to thank Mathieu for the astonishing contribution you are giving to the community. Your efforts are really impressive! I would like to write this post in detail, so as to help as many people as possible who may be working with a scenario similar to mine.

HARDWARE CONFIGURATION
ROBOT PLATFORM : 2-wheeled differential drive robot
RGBD SENSOR : Stereolabs ZED Mini camera
LIDAR SENSOR : Quanergy 3D lidar (360° FOV)
IMU SOURCES : standalone central unit / ZED Mini embedded IMU
GPS SENSOR
COMPUTING PLATFORM : 2-machine ROS-based network
COMMUNICATION LINK : local Ethernet network
RGBD PROCESSING UNIT : NVIDIA Jetson TX1 [ L4T (JetPack 3.3) / ZED SDK 2.7.1 / ROS Kinetic ]
ROS MASTER : Lenovo ThinkPad laptop, Intel i5-5300U CPU @ 2.30GHz / 8GB DDR3 RAM [ Ubuntu 16.04 / ROS Kinetic ]

SOFTWARE CONFIGURATION
Jetson TX1 is used for image/depth processing only [ Input : raw image streams - IMU / Output : RGB + depth map - odometry (published topics) ]
RTAB-MAP : version 0.18.0 [ ROS node location : laptop / standalone location : laptop ]
ZED SDK : version 2.7.1 [ installation location : Jetson TX1 ]
ZED ROS WRAPPER : [ ROS node location : Jetson TX1 ]
LIDAR DRIVER NODE : [ ROS node location : laptop ]

PROJECT'S GOAL
A. Record data navigating an urban outdoor environment [ ~1 km start-to-finish path - few or absent loop closures ] [ 2D occupancy grid map required for further navigation purposes ].
B. Process the database offline to retrieve the most accurate map, not subject to online processing constraints.
C. Use the created map for navigation purposes, exploiting appearance-based localization and possibly visual odometry fusion.

QUESTIONS
1. RECORDING DATA
Recorded topics : RGB image stream - depth map - lidar 2D laser scan - ZED camera IMU odometry - robot wheel odometry - GPS
a. Given an offline map creation, am I still subject to any quality/performance trade-off? Is it possible to store raw data at maximum quality/precision/input rate?
b. My intended pipeline is : record a ROS bag file at maximum quality [RTAB-Map off] > process the bag by means of rtabmap-dataRecorder [offline] > process the resulting database by means of the rtabmap standalone application and configure parameters offline. Is that feasible?
c. Is there any threshold (regarding input rate, quality, etc.) above which a raw data quality improvement would just result in useless overkill?

2. PROCESSING THE DATABASE OFFLINE
Given the pipeline feasibility, I'm wondering if I'll be able to fully tune rtabmap parameters offline so as to get the best accuracy I'd like to obtain (given a good odometry performance).

3. LOCALIZATION MODE
I'm aware of the integration of move_base within the rtabmap framework. I am wondering if it is possible to run rtabmap in localization mode during navigation within the created map while having multiple localization sources (e.g. lidar-based Monte Carlo localization - appearance-based rtabmap localization). I'm interested in evaluating the performance of the different localization approaches.

Thank you in advance for your support!
Hi,
It is an interesting setup. Does the Quanergy LiDAR give a point cloud similar to a Velodyne? If so, in your recorded topics you could record 3D point clouds instead of the 2D laser scan if you want to keep all the data. You have both wheel odometry and ZED odometry: you would have to use only one, or combine both with the robot_localization package (a robot_localization sketch follows this reply).

For your questions:

1. RECORDING DATA
Yes, it is possible. You have the choice to record a rosbag, which gives you the most flexibility if you want to try other ROS approaches, or to record data in RTAB-Map's database format directly using data_recorder.launch. When recorded in RTAB-Map's database format, we can reprocess with different parameters using the rtabmap standalone application (use the recorded database as input source) or the rtabmap-reprocess tool (which can reprocess it faster than real-time). With the rosbag, it is like replaying the experiment online. For this reason, it may be most useful to record the rosbag with everything at first, so that you can test different rtabmap input configurations afterwards.

Note about data_recorder.launch: if you want to use it, I suggest using the rtabmap_ros/rgbd_sync nodelet to pre-synchronize the depth/RGB images together before feeding them to the recorder node. Example:

  <launch>
    <node pkg="nodelet" type="nodelet" name="rgbd_sync" args="standalone rtabmap_ros/rgbd_sync">
      <remap from="rgb/image"       to="/zed/rgb/image_rect_color"/>
      <remap from="depth/image"     to="/zed/depth/depth_registered"/>
      <remap from="rgb/camera_info" to="/zed/rgb/camera_info"/>
      <param name="approx_sync" value="false"/>
    </node>

    <include file="$(find rtabmap_ros)/launch/data_recorder.launch">
      <arg name="subscribe_odometry"   value="true"/>
      <arg name="subscribe_scan_cloud" value="true"/>
      <arg name="subscribe_depth"      value="false"/>
      <arg name="subscribe_rgbd"       value="true"/>
      <arg name="rgbd_topic"           value="/rgbd_image"/>
      <arg name="scan_cloud_topic"     value="/quanergy_cloud"/>
      <arg name="odom_topic"           value="/zed_vio_odom"/> <!-- other choices: /wheel_odom or /odom_combined (robot_localization) -->
      <arg name="frame_id"             value="base_link"/>
      <arg name="max_rate"             value="1"/> <!-- 0 = record as fast as possible -->
    </include>
  </launch>

Note that an odometry covariance compatible with RTAB-Map is very important. To use arbitrary fixed values, you would have to set "odom_frame_id" so that odometry is read from TF instead. data_recorder.launch lacks arguments to set the covariance values; you would have to edit the file and add, under the data_recorder node, something like:

  <arg name="odom_tf_linear_variance"  default="0.01"/>
  <arg name="odom_tf_angular_variance" default="0.005"/>

As discussed in the previous point, doing "Record ROS BAG file at maximum quality [RTABMAP OFF]" is a good way to start. After processing one time with rtabmap, you will get a database. This database can be used as input in the standalone app to tune parameters offline and replay the database at a different frame rate (e.g. faster than real-time for convenience).

It is more a hard drive space issue: recording all raw data at maximum frame rate can give you very big rosbags. If odometry is already computed, RTAB-Map by default requires only 1 Hz data. The recording approach using data_recorder.launch would give much smaller files if the rate is limited to 1 Hz, for example.

2. PROCESSING THE DATABASE OFFLINE
Yes, we can change all parameters offline.
3. LOCALIZATION MODE
If you are creating a 3D map, it could be possible to make rtabmap publish only the OctoMap for a lidar-based localization approach using the OctoMap, while disabling the appearance-based rtabmap localization (Kp/MaxFeatures=-1). In that case, the other localization approach would publish /map -> /odom. Make sure to set publish_tf to false for rtabmap so that /map -> /odom is not published by multiple nodes (a small snippet illustrating this follows this reply). To use another appearance-based localization approach, we would need more info about the format that is used. For example, it is possible to export a visual map in Bundler format (see this post) and use ACG_localizer.

cheers,
Mathieu
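For the odometry fusion mentioned at the top of this reply, below is a minimal sketch of a robot_localization configuration. This is only an illustration: the topic names (/wheel_odom, /zed/odom, /odom_combined) and the selected measurement dimensions are assumptions for this setup and would have to be adapted.

  <launch>
    <!-- Hypothetical fusion of wheel odometry and ZED visual odometry.
         Topic names and fused dimensions are assumptions; adjust to the actual robot. -->
    <node pkg="robot_localization" type="ekf_localization_node" name="ekf_odom">
      <param name="frequency" value="30"/>
      <param name="two_d_mode" value="true"/>        <!-- assuming mostly planar motion -->
      <param name="odom_frame" value="odom"/>
      <param name="base_link_frame" value="base_link"/>
      <param name="world_frame" value="odom"/>       <!-- fuse continuous sources only -->
      <param name="publish_tf" value="true"/>

      <!-- Wheel odometry: use forward/lateral velocities and yaw velocity -->
      <param name="odom0" value="/wheel_odom"/>
      <rosparam param="odom0_config">[false, false, false,
                                      false, false, false,
                                      true,  true,  false,
                                      false, false, true,
                                      false, false, false]</rosparam>

      <!-- ZED visual odometry: used differentially to avoid absolute jumps -->
      <param name="odom1" value="/zed/odom"/>
      <param name="odom1_differential" value="true"/>
      <rosparam param="odom1_config">[true,  true,  false,
                                      false, false, true,
                                      false, false, false,
                                      false, false, false,
                                      false, false, false]</rosparam>

      <remap from="odometry/filtered" to="/odom_combined"/>
    </node>
  </launch>

The fused /odom_combined output could then be fed to the recorder or to rtabmap in place of a single odometry source.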
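As for letting a lidar-based approach own the /map -> /odom transform, the snippet below is a minimal, hedged sketch of the relevant rtabmap settings. The parameter names are the RTAB-Map parameters mentioned above; input topic remappings are omitted and would have to match your setup.

  <launch>
    <!-- rtabmap publishes the map/OctoMap only: appearance-based localization is
         disabled and /map -> /odom is left to the external lidar-based localizer.
         Input topic remappings are omitted for brevity. -->
    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
      <param name="frame_id"   value="base_link"/>
      <param name="publish_tf" value="false"/> <!-- avoid two nodes publishing /map -> /odom -->
      <param name="Kp/MaxFeatures"        type="string" value="-1"/>    <!-- disable appearance-based loop closure/localization -->
      <param name="Mem/IncrementalMemory" type="string" value="false"/> <!-- localization mode on the existing database -->
    </node>
  </launch>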
Thank You Mathieu for your support!
I've tried to figure out how RTAB-Map's localization mode should work. I'm not sure I completely understood what is going on; could you confirm that I got it right?

So, localization mode is based on the same algorithm used for loop closure detection during mapping. Therefore, when I move the robot in an already mapped environment, the RGB-D images streamed by the camera are used for appearance-based place recognition, done through the bag-of-visual-words technique. Here I am not sure how the depth information comes into play:
1. Does every keyframe feature stored in the map have the info of its depth?
2. Is the depth of the features used for estimating the pose of the camera through SfM?
3. The point cloud should not be necessary when localizing through the RGB visual-words approach. Is it true anyway that the point cloud can be exploited to refine the pose estimate by ICP processing?

The localization output is a transform map->odom, so it should estimate the position of the robot within the map. Given your experience, what should I expect (if good overall conditions are met)?
A. A reliable "continuous" global localization.
B. Just a "discrete" global localization, useful only for "localization recovery" in case other localization methods fail (I'm thinking about kidnapped-robot occurrences).

Can I use the appearance-based localization to locate the robot within a 2D occupancy map? Excuse me for my naive questions, I'm quite a newbie in this field.

Thank you again for your support
Cheers
Hi,
1. Does every keyframe feature stored in the map have the info of its depth?
Yes.

2. Is the depth of the features used for estimating the pose of the camera through SfM?
It depends: the pose estimation is done by PnP, which is a 2D <-> 3D estimation. For localization, the depth is not required. However, for the map, the depth of the features must be known.

3. Is it true anyway that the point cloud can be exploited to refine the pose estimate by ICP processing?
The point cloud is not necessary for appearance-based localization. However, if a laser scan is available, it can be used to refine the transformation with ICP. You will find more details in the paper "RTAB-Map as an Open-Source Lidar and Visual SLAM Library for Large-Scale and Long-Term Online Operation".

Can I use the appearance-based localization to locate the robot within a 2D occupancy map?
Using rtabmap, yes: rtabmap keeps the 3D features even if you output a 2D occupancy grid. For localization, as rtabmap is updated at 1 Hz by default, you will get discrete jumps for the /map -> /odom transform. This follows the standard, see REP 105. Localizations may not happen at every frame, so the robot may "jump" when relocalized to correct the odometry drift.

cheers,
Mathieu
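For reference, here is a minimal, hedged sketch of a localization-mode launch where laser scans refine the localization transforms with ICP, as described above. Topic names and the database path are assumptions for this setup; Reg/Strategy=2 selects visual registration followed by ICP refinement.

  <launch>
    <!-- Hedged sketch: appearance-based localization against an existing database,
         with laser scans used to refine the estimated transforms through ICP.
         Topic names and the database path are assumptions. -->
    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
      <param name="frame_id"       value="base_link"/>
      <param name="database_path"  value="~/.ros/rtabmap.db"/>
      <param name="subscribe_rgbd" value="true"/>
      <param name="subscribe_scan" value="true"/>
      <remap from="rgbd_image" to="/rgbd_image"/>
      <remap from="scan"       to="/scan"/>
      <remap from="odom"       to="/zed_vio_odom"/>

      <param name="Mem/IncrementalMemory"  type="string" value="false"/> <!-- localization mode -->
      <param name="Mem/InitWMWithAllNodes" type="string" value="true"/>  <!-- keep the whole map in working memory -->
      <param name="Reg/Strategy"           type="string" value="2"/>     <!-- 0=visual, 1=ICP, 2=visual then ICP refinement -->
    </node>
  </launch>

With this configuration the /map -> /odom correction is still published at the rtabmap update rate (1 Hz by default), so the discrete jumps mentioned above remain.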
In reply to this post by DavideR
@Davide are you making delivery robots? Have you tried to do the positioning without lidars, only vSLAM?
Hi @nathgilson! Yes, I'm working on a project related to last-mile delivery robots. I'm developing a master's thesis on stereo vision and autonomous navigation. The goal is to address the trade-offs between the two main technologies for localization (lidars and RGBD). I'm currently working on it and still have to start the experimental session.
My first impression is that it would be possible to do autonomous navigation without lidars, provided a series of constraints that are valid for urban navigation (e.g. navigation on sidewalks). I will keep you updated.
cheers
@DavideR we are working on almost the same problem at the moment. I am part of the rb2.io team (a spin-out from the Swiss Federal Institute), and we are working with a Berkeley-based startup that makes delivery robots on campuses.
The delivery robot market seems very promising, but today it is limited by the price of lidars. We believe RGBD sensors are a great alternative but, as you may have noticed, they are quite limited by brightness and weather changes outdoors. At which university are you doing your master's project? Do you have an email or a Telegram account so I can contact you privately?
Cheers
In reply to this post by matlabbe
Hi Mathieu,
I'm working with RTAB-Map trying to create a map (testing it in an indoor environment). Unlike what was written in the first post, at the moment rtabmap_ros runs directly on the Jetson TX1 (so that both the ZED nodes and rtabmap share the same CPU, avoiding image stream transmission over the network). At first I tried recording rosbags; now the whole process runs in real time.

QUESTION
1. I first encountered this problem when looking at a database recorded from a rosbag; then I found the same issue in a database recorded directly during a real-time session. Even looking at rtabmapviz I see the same situation: in the "Odometry" view, the depth image is shown randomly, and most of the time there is just the RGB image without any depth registered. Nevertheless, the cloud is continuously updated in a correct way. This results in database keyframes with no depth image associated. It also seems that the depth image appears only when the camera is close to walls or particular objects. Anyway, looking at the stream delivered by the ZED node, it seems that the depth topic is continuously published and is correct.

I'm sorry for not posting other details regarding the launch file or the parameters, I'm not able to retrieve them now (will post soon if required). I was just wondering if this is a known problem.

Thank you for your support!
I sent you a message via Telegram.
Nathan @rb2.io
+41 76 22 555 76
Automation systems
Administrator
|
In reply to this post by DavideR
Hi Davide,
If the 3D cloud is correctly created, it means the depth image is OK. I think the problem comes from the rendering in the UI; I have seen this problem before with the ZED stream. There may be large depth values that make everything black because of the grey color scaling. I opened an issue.
cheers,
Mathieu
Hi Mathieu,
I confirm that the problem is fixed, thank you! I've successfully created a map and I'm able to localize the robot within it.

QUESTIONS
I have a few questions for my next steps:

1. I can choose between:
A. grabbing HD720 images with MEDIUM [2] depth quality
B. grabbing VGA images with ULTRA [4] depth quality
Which trade-off is better for a more robust loop closure detection? What is more important for feature processing: image resolution or depth accuracy?

2. ICP registration: 3D lidar capabilities [360° FOV - 8 scan rings - range > 50 m]. What is the best configuration for a good ICP refinement? I'd like to know which is better between:
A. processing the cloud directly (what is a good downsampling value? [range - voxel size - floor removal...])
B. creating a scan through the pointcloud_to_laserscan package (which reference frame should I use to create the scan [base_link, camera_frame, lidar_frame]? Is it ok if they pitch?)

3. I want to figure out how GPS fixes are currently exploited by RTAB-Map. Could you briefly describe which functionalities are actually viable?

Thank you a lot
cheers,
Davide
Hi Davide,
1. Depth accuracy for motion estimation. For loop closure detection, if you use scaled features like SURF/SIFT/ORB, the higher the resolution the better (you may have to tune their parameters for large resolutions), however more computation time is required.

2. If you are planning to do 2D mapping, you can use the middle ring of the lidar, assuming the lidar is parallel to the ground (a pointcloud_to_laserscan sketch follows this reply). For 3D mapping, use the full point cloud. With a low number of rings, you may increase the voxel filter size (0.1 to 0.5 meters). If you build rtabmap with libpointmatcher, ICP will be a lot better, in particular with 3D point clouds. I would refer you to this paper; look at the configuration used for the KITTI dataset (it lists the ICP parameters used in the different configurations of the paper). I am still working on a launch file for the VLP-16 that could be useful as an example with that kind of lidar. I hope I will be able to publish it on the rtabmap_ros repo in the next week or so.

3. GPS values can be saved in the nodes of the graph. For visualization, they are used to align the graph to ENU coordinates and to export poses in GPS coordinates. They are also used to reject bad loop closure detections. If Optimizer/PriorsIgnored is false (default true), they can be used during the graph optimization (this is however very experimental with the g2o and GTSAM optimizers, as it is not always working; still an open issue).

cheers,
Mathieu
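If the pointcloud_to_laserscan route is chosen for 2D mapping, the following is a minimal, hedged sketch. The /quanergy_cloud topic name and the height window are assumptions to adapt; target_frame is set to base_link so the extracted slice follows the robot's frame.

  <launch>
    <!-- Hedged sketch of converting the 3D lidar cloud into a 2D scan.
         /quanergy_cloud and the height limits are assumptions to adapt. -->
    <node pkg="pointcloud_to_laserscan" type="pointcloud_to_laserscan_node" name="cloud_to_scan">
      <remap from="cloud_in" to="/quanergy_cloud"/>
      <remap from="scan"     to="/scan"/>
      <param name="target_frame" value="base_link"/> <!-- transform points into the robot frame first -->
      <param name="min_height"   value="0.2"/>       <!-- keep points roughly at the middle rings, above the ground -->
      <param name="max_height"   value="1.0"/>
      <param name="range_min"    value="0.5"/>
      <param name="range_max"    value="50.0"/>
      <param name="use_inf"      value="true"/>
    </node>
  </launch>

Alternatively, for 3D mapping the full cloud can be fed to rtabmap as a scan_cloud, as described in point 2.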
Hey Mathieu!
I have questions about localization mode.

1. I mapped the area forcing a 3DoF registration, so that my graph lies on a single plane. On the contrary, when I run rtabmap in localization mode (forcing 3DoF), the transform /map->/odom evaluates all 6 DoF anyway. My robot pitches and the pose is estimated correctly, but I'd like to keep the transformation on the ground plane, ignoring the z, pitch and roll values (otherwise my robot starts to fly :D). Am I doing something wrong, or is it supposed to work that way?

2. If everything is as expected, what could be a nice workaround? How can I filter the /map->/odom transform?

3. Why does the database graph get modified even though localization mode is activated?

Thank you for your patience!
cheers
Hi,
1. Running in localization mode doesn't necessarily mean 2D. Reg/Force3DoF should be set to true (see the snippet below). If it is true and /map->/odom is still 6DoF, there is a bug.

2. A small database sample could help to better see your system.

3. The graph should not get modified in localization mode, unless memory management is enabled.

cheers,
Mathieu
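For reference, a minimal hedged snippet of the parameters discussed in point 1, forcing planar registration in localization mode (input topic remappings are omitted; merge into the existing rtabmap launch):

  <launch>
    <!-- Hedged snippet: run rtabmap in localization mode while forcing planar
         (x, y, yaw) registration, so /map -> /odom stays on the ground plane. -->
    <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap">
      <param name="Reg/Force3DoF"         type="string" value="true"/>
      <param name="Mem/IncrementalMemory" type="string" value="false"/>
    </node>
  </launch>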