Hello Mathieu,
I have been using RTAB-Map quite successfully for my robot. Here are my setup and use case.

Setup: Two Realsense cameras, facing backwards and forwards, used in stereo mode. RTAB-Map is set up for two stereo cameras, with odometry from an EKF node. Using ROS2, ORB for feature extraction, and a FLANN KD-Tree.

Use case:
During mapping: use RTAB-Map to generate the 2D grid map (RGBD/CreateOccupancyGrid=true).
During localization: I want RTAB-Map to serve the map but not update the occupancy grid (RGBD/CreateOccupancyGrid=false), and to provide the map->odom transformation. I use the /map from RTAB-Map as the "static" map, and the point clouds generated by the cameras (after heavy filtering) and other sensors (ToF) for the local/dynamic map.

Questions:
a) RTAB-Map is updating the occupancy grid even when RGBD/CreateOccupancyGrid=false (localization mode); is there another parameter I can use to prevent this?
b) Looking at the code, the ROS2 wrapper updates the database unconditionally with the updated grid; it also seems to save the long-term memory into the database.

Performance: The /info message tells me that the most time is spent in "AddNewKeyword", then in "Keypoint3D". Is there any way to speed up AddNewKeyword? I can generate the ORB keypoints ("keywords"), descriptors and 3D keypoints externally when processing the raw image from the camera, synchronized with the image. Could I just populate the key_points field in RGBDImage.msg? What would be your suggestion? This could be useful if there are cameras that can generate these in hardware.

The output while localizing is as follows:
[INFO] [1727896464.827454487] [rtabmap]: rtabmap (5913): Rate=0.25s, Limit=0.000s, Conversion=0.0012s, RTAB-Map=0.1648s, Maps update=0.0128s pub=0.0425s delay=1.6614s (local map=1951, WM=1951)
RTAB-Map itself does not take a lot of time (0.1648s); the largest time is the delay (1.6614s). Not sure what that represents; is there a way to speed it up?

Once again, thank you for this comprehensive package. It is as much fun exploring it as it is building the robot :)

Best Regards,
Sandeep
Hi Sandeep,
Do you have "map_always_update" set to true? If memory management is disabled, that would be the only reason why the occupancy grid is updated in Localization mode. rtabmap node would save all the time the optimized map to database. However, if there are no changes, the map should stay the same. The AddNewKeyword time refers to Vocabulary update. Reducing Kp/MaxFeatures may help to avoid quantizing too many visual features. However, in general it should be fairly fast, except when the vocabulary has to be re-balanced. You can set Kp/FlannRebalancingFactor=1 to avoid re-balancing. The Keypoint3D time may refer to stereo correspondences computation (between left and right cameras). This could be saved if your camera can provide already a depth image (then use RGB-D mode instead of Stereo mode), or if the keypoints 3D are already computed from the camera (to populate RGBDImage topic with). If you extract features (keypoint and descriptors) in the camera driver, you can populate RGBDImage topic to avoid re-extracting them in odometry or rtabmap nodes. For the high delay (1.6614s), it is computed by doing "ros::Time::now() - input data stamp". As rtabmap processing and map update would only add ~208 combined delay, there is a 1.453 delay coming from the input message synchronization. Depending on how topic_queue_size and sync_queue_size are configured, that could add more or less delay, as discussed in this post. For ROS2, there would a fix coming up. cheers, Mathieu |
Hi Mathieu,
Thank you for your feedback. I have tried RGB-D mode; stereo mode seems to give better accuracy.

I did try using your latest GPU acceleration to speed things up; unfortunately it crashed in Stereo.cpp (198):
d_pyrLK_sparse->calc(d_leftImage, d_rightImage, d_leftCorners, d_rightCorners, d_status);
This is the error:
terminate called after throwing an instance of 'cv::Exception'
[component_container_mt-10] what(): OpenCV(4.9.0) /workspaces/opencv/opencv_contrib/modules/cudev/include/opencv2/cudev/ptr2d/texture.hpp:147: error: (-217:Gpu API call) invalid argument in function 'createTextureObject'
Not sure how to debug this further. My hardware platform is a Jetson AGX Orin 64GB with OpenCV 4.9.0; due to other system dependencies this is the only version I can run. If you have any suggestions I will be happy to try them and provide feedback.

I would definitely like to try extracting the keypoints, descriptors and keypoint depths in the driver. I have been looking at the rtabmap_msgs/msg/RGBDImage interface; it already has fields for a) key_points, b) descriptors and c) points. I see that the "points" field is used only in RGB-D mode; I will use it as the depth for keypoints in stereo mode. Would you be interested in merging my changes (after review, of course)? That way I do not have to maintain my own version.

Thank you,
Sandeep
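As a rough illustration of the driver-side plan (plain OpenCV, not RTAB-Map code): extract ORB features on the rectified left image, track them into the right image with pyramidal Lucas-Kanade (loosely mirroring what Stereo.cpp does on GPU), and compute each keypoint's depth as fx * baseline / disparity. The thresholds below are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_stereo_features(left_gray, right_gray, fx, baseline, n_features=1000):
    # Detect ORB keypoints and descriptors on the left image.
    orb = cv2.ORB_create(nfeatures=n_features)
    kpts, desc = orb.detectAndCompute(left_gray, None)
    if not kpts:
        return [], None, np.empty(0, dtype=np.float32)
    pts_left = np.float32([kp.pt for kp in kpts]).reshape(-1, 1, 2)
    # Track left keypoints into the right image with pyramidal LK flow.
    pts_right, status, _ = cv2.calcOpticalFlowPyrLK(
        left_gray, right_gray, pts_left, None)
    depths = np.full(len(kpts), np.nan, dtype=np.float32)
    for i in range(len(kpts)):
        xl, yl = pts_left[i, 0]
        xr, yr = pts_right[i, 0]
        disparity = xl - xr
        # Accept only tracked points that respect the epipolar constraint
        # (same row on rectified images) and have positive disparity.
        if status[i, 0] and disparity > 0.5 and abs(yl - yr) < 2.0:
            depths[i] = fx * baseline / disparity
    return kpts, desc, depths
```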
Hi Mathieu,
My apologies, it seems I just need to populate the key_points, descriptors and points fields in the driver; the rest of the code is already there.
Sandeep
Hi Sandeep,
Yes, if you fill in the data in the RGBDImage topic, rtabmap's VO nodes and/or the rtabmap node can use them directly. It is also possible to fill just the key_points/points/descriptors/camera_info fields without any image.

For the optical flow OpenCV/CUDA crash, you may verify that the inputs look okay, e.g., is d_leftCorners empty?
https://github.com/introlab/rtabmap/blob/17c2142c98b80648a272198a144b12b790abbfd6/corelib/src/Stereo.cpp#L198

cheers,
Mathieu
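A hypothetical rclpy sketch of filling those fields. Assumptions beyond what the thread states: rtabmap_msgs' KeyPoint mirrors cv::KeyPoint, and the descriptors field takes the raw row-major bytes of the N x D ORB descriptor matrix; both are worth verifying against your rtabmap_msgs version.

```python
import numpy as np
from rtabmap_msgs.msg import RGBDImage, KeyPoint, Point3f

def make_feature_msg(stamp, frame_id, camera_info, kpts, points3d, descriptors):
    # kpts: list of cv2.KeyPoint; points3d: Nx3 float array (3D keypoints);
    # descriptors: NxD uint8 array (ORB), one row per keypoint.
    msg = RGBDImage()
    msg.header.stamp = stamp
    msg.header.frame_id = frame_id
    msg.rgb_camera_info = camera_info  # camera_info is still required
    for kp, p in zip(kpts, points3d):
        k = KeyPoint()
        k.pt.x, k.pt.y = float(kp.pt[0]), float(kp.pt[1])
        k.size = float(kp.size)
        k.angle = float(kp.angle)
        k.response = float(kp.response)
        k.octave = int(kp.octave)
        msg.key_points.append(k)
        msg.points.append(Point3f(x=float(p[0]), y=float(p[1]), z=float(p[2])))
    # Assumed encoding: raw bytes of the descriptor matrix.
    msg.descriptors = np.ascontiguousarray(descriptors, dtype=np.uint8).tobytes()
    return msg
```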
Hi Mathieu,
That is great to know. If I understand the code correctly, I need to send the keypoints + descriptors + 3D keypoints + images in mapping mode (since I want the occupancy grid), but in localization mode I can eliminate the images, is that right? I also see that the descriptors are filtered by depth; how will this be affected?

As for the crash, I will investigate a little bit more and let you know.

Thank you,
Sandeep
Yes. Either with or without a depth image, if descriptors are provided (with Mem/UseOdomFeatures=true), the 3D points are used to filter by depth, if Kp/MinDepth and/or Kp/MaxDepth are set. For visual odometry, they will be filtered if Vis/MinDepth and/or Vis/MaxDepth are set. In summary, if you provide key_points, points and descriptors in the RGBDImage topic, with or without a depth image, the result should be the same.

cheers,
Mathieu
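For reference, the filter parameters named above in the same string-parameter style as the earlier launch sketch; the numeric values are illustrative only.

```python
# Illustrative values only -- parameter names are from the reply above.
depth_filter_params = {
    'Mem/UseOdomFeatures': 'true',  # reuse features provided with the input data
    'Kp/MinDepth': '0.3',           # metres; depth filter for vocabulary features
    'Kp/MaxDepth': '5.0',
    'Vis/MinDepth': '0.3',          # corresponding bounds for visual odometry
    'Vis/MaxDepth': '5.0',
}
```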