Hello Mathieu,
I'm using RTAB-Map in "library" mode (without ROS), managing SLAM from my own C++ code on a ground robot. The SLAM itself works, but during the process (and at the end of it in particular) I would like to get an optimized occupancy grid of the environment to run my 2D navigation algorithms on. Could you tell me where to look for a C++ example of how to generate the `rtabmap::OccupancyGrid`? Thank you very much, Antonio
Administrator
Hi Antonio,
I'll do an example based on the C++ RGBD tutorial. We need to set RGBD/CreateOccupancyGrid to true so that local occupancy grids are created for each node in the map (on ROS it is set to true by default):

```cpp
Rtabmap * rtabmap = new Rtabmap();
ParametersMap parameters;
parameters.insert(ParametersPair(Parameters::kRGBDCreateOccupancyGrid(), "true"));
rtabmap->init(parameters);
```

To tune occupancy grid parameters, look at the Grid group of parameters. Then, to assemble a global map with the OccupancyGrid class, we can do it after mapping by adding this code at the end of the main.cpp:

```cpp
// Save 2D map
printf("Saving occupancy grid \"map.pgm\"...\n");
rtabmap::OccupancyGrid grid;
for(std::map<int, Transform>::iterator iter=optimizedPoses.begin(); iter!=optimizedPoses.end(); ++iter)
{
	Signature node = nodes.find(iter->first)->second;

	// uncompress grid data
	cv::Mat ground, obstacles;
	node.sensorData().uncompressData(0, 0, 0, 0, &ground, &obstacles);
	grid.addToCache(node.id(), ground, obstacles);
}
grid.update(optimizedPoses);

float xMin, yMin;
cv::Mat map = grid.getMap(xMin, yMin);
if(!map.empty())
{
	// convert to gray scaled map
	cv::Mat map8U(map.rows, map.cols, CV_8U);
	for (int i = 0; i < map.rows; ++i)
	{
		for (int j = 0; j < map.cols; ++j)
		{
			char v = map.at<char>(i, j);
			unsigned char gray;
			if(v == 0)        // free
			{
				gray = 178;
			}
			else if(v == 100) // occupied
			{
				gray = 0;
			}
			else              // -1: unknown
			{
				gray = 89;
			}
			map8U.at<unsigned char>(i, j) = gray;
		}
	}
	if(cv::imwrite("map.pgm", map8U))
	{
		printf("Saving occupancy grid \"map.pgm\"... done!\n");
	}
	else
	{
		printf("Saving occupancy grid \"map.pgm\"... failed!\n");
	}
}
```

cheers,
Mathieu
Thank you very much for the quick and detailed reply!
Best, Antonio
Mathieu,
could I ask you also whether the method I'm using to feed the odometry into RTAB-Map is correct? _camera_pose_listener and _odometry_listener are two objects that respectively give me the pose of the camera with regard to the robot's base, and the position of the robot's base in the environment (computed from the wheel encoders). _visual_odometry is a pointer to rtabmap::OdometryF2M.

```cpp
void iterate()
{
	auto m = _camera_pose_listener->matrix();
	const auto& o = *_odometry_listener;
	std::cout << "CameraPose:" << std::endl << m;

	_camera->setLocalTransform(rtabmap::Transform::fromEigen4d(m));
	rtabmap::SensorData camera_frame = _camera->takeImage();

	rtabmap::Transform wheel_odometry(o.x(), o.y(), o.theta());
	std::unique_ptr<rtabmap::OdometryInfo> odometry_info(new rtabmap::OdometryInfo);
	rtabmap::Transform corrected_odometry = _visual_odometry->process(camera_frame, wheel_odometry, odometry_info.get());

	bool added_to_map = _rtabmap->process(camera_frame, corrected_odometry);
	_is_visual_odometry_lost = odometry_info->lost;

	std::cout << "Added to map ? " << added_to_map << std::endl;
	std::cout << "Odometry lost ? " << _is_visual_odometry_lost << std::endl;
	std::cout << _rtabmap->getStatistics().mapCorrection().toEigen4d() << std::endl;
}
```

Cheers, Antonio
Hi,
The transform passed to _visual_odometry should be a guess for the incremental transform to be computed. For example, with "op" the previous wheel odometry value:

```cpp
rtabmap::Transform wheel_odometry_previous(op.x(), op.y(), op.theta());
rtabmap::Transform wheel_odometry_current(o.x(), o.y(), o.theta());
rtabmap::Transform guess = wheel_odometry_previous.inverse() * wheel_odometry_current;
rtabmap::Transform corrected_odometry = _visual_odometry->process(camera_frame, guess, odometry_info.get());
```

It is also possible not to set any guess and/or to set the Odom/GuessMotion parameter to true for _visual_odometry:

```cpp
ParametersMap parameters;
parameters.insert(ParametersPair(Parameters::kOdomGuessMotion(), "true"));
```

If you want to force 2D visual odometry, you can add this parameter when creating the visual odometry and rtabmap objects:

```cpp
ParametersMap parameters;
parameters.insert(ParametersPair(Parameters::kRegForce3DoF(), "true"));
```

If you want to navigate in feature-less environments (where visual odometry always gets lost), you may check whether it is lost before passing the data to rtabmap, and if so, pass the transform from your wheel odometry instead. You may also want to reset visual odometry to the new pose:

```cpp
Transform pose = corrected_odometry;
if(corrected_odometry.isNull())
{
	pose = _visual_odometry->getPose() * guess;
	_visual_odometry->reset(pose);
}
bool added_to_map = _rtabmap->process(camera_frame, pose);
```

Note also that you may not want to add a new frame to the map at the same rate as odometry, to reduce the computation power and memory required. If odometry can be computed in ~50 ms, you could skip 20 iterations before adding a frame to rtabmap (to get the default ~1 Hz map update). See NoEventsExample for an example.

cheers,
Mathieu
Thank you very much for your help Mathieu,
Cheers, Antonio
Hello Mathieu,
I have a question about the poses of the clouds in the 3D map. In the following code, with regard to which tf frame is "current_cloud_pose" expressed? The "global" /map frame (so already taking into account the correction of the odometry), or /odom (which would mean that, to get the absolute position of the cloud in the environment, I have to correct it using _rtabmap->getStatistics().mapCorrection())?

```cpp
std::map<int, rtabmap::Signature> map_nodes;
std::map<int, rtabmap::Transform> map_poses;
std::multimap<int, rtabmap::Link> map_links;
_rtabmap->get3DMap(map_nodes, map_poses, map_links, false, true);

auto clouds_in_widget = _cloud_viewer->getAddedClouds();
for (const auto& p : map_poses)
{
	int current_cloud_id = p.first;
	rtabmap::Transform current_cloud_pose = p.second;
	std::string current_cloud_name = "MapCloud" + std::to_string(current_cloud_id);
}
```

Cheers, Antonio
They are in the /map frame. Note that in your example the "optimized" argument of get3DMap() is false, which gives odometry poses without loop closure corrections.
The mapCorrection() transform is the /map -> /odom transform, which only makes sense to use to correct the latest odometry pose. Cheers, Mathieu