Hey Mathieu,
I have just started doing some research on stereo SLAM and found that your work is an excellent starting point for me. Thank you very much. I have checked out the indigo-devel branch and built the standalone program with Visual Studio 2012. Now I can run the main RTAB-Map program, but I ran into two issues.

Issue 1: Some warning messages appear while the program is running. They do not crash the program, but I am not sure whether I have built the system correctly:

Generic Warning: In ..\..\..\Rendering\Core\vtkPolyDataMapper.cxx, line 28
Error: no override found for 'vtkPolyDataMapper'.

Warning: In ..\..\..\Rendering\Core\vtkInteractorStyleSwitchBase.cxx, line 43
vtkInteractorStyleSwitchBase (172EDDD0): Warning: Link to vtkInteractionStyle for default style selection.

ERROR: In ..\..\..\Rendering\OpenGL\vtkOpenGLTexture.cxx, line 200
vtkOpenGLTexture (153E6CF8): No scalar values found for texture input!

Issue 2: I can't repeat your stereo mapping example. The "Start" button is always disabled (greyed out) when I try the database-based method (stereo_20Hz.db) and change the settings in Preferences. When I open the database from the main menu, it seems the program only plays back the database and tries to detect loop closures, but does not build the map.

Could you give me some suggestions on how to solve these issues? Thanks a lot.
Best regards,
Yanming
Administrator
Hi,
The "Start" button will be enabled only after a database is created or opened. To start from a clean database, click "File->New database...". Is a graph being built in the Graph view? (Window->Show view...->Graph view) If loop closure detection view is showing some images, the map is likely building. If you are seeing nothing in 3D Map view (even the default coordinate or a grid when you right-click on it and select "show grid"), there is something wrong with the OpenGL visualizer. Which PCL and VTK versions are you using? cheers |
Hi Mathieu,
Thanks for your quick reply. I think there is a problem in the OpenGL part, since I got those warning messages during startup. I used PCL-1.7.2-AllInOne-msvc2012-win32.exe to install the PCL package. It contains all the third-party libraries required by PCL, including VTK 6.2, but that VTK 6.2 build does not contain vtkGUISupport*. So I downloaded the VTK 6.2 source code, compiled it, and copied only the vtkGUISupportQt* libraries into the corresponding PCL 3rdParty folder. Is there anything wrong with this approach, or do I need to use different versions of PCL and VTK?
Best regards,
Yanming
Hi Mathieu,
I went into the code and found that the warning message

ERROR: In ..\..\..\Rendering\OpenGL\vtkOpenGLTexture.cxx, line 200
vtkOpenGLTexture (153E6CF8): No scalar values found for texture input!

comes from the CloudViewer constructor and is caused by the _visualizer->setCameraPosition call. Do you have any suggestion on how to solve this issue? Thanks a lot.
Yanming
Administrator
Hi,
Are you referring to these lines? You can try commenting them out to see if that avoids the problem. My Windows computers are on VTK 5.8 or 5.10; I am not sure about VTK 6.x, though PCL 1.7.2 seems to support it. I am also using Visual Studio 2010. For info, where did you find PCL-1.7.2-AllInOne-msvc2012-win32.exe? It is also better to build RTAB-Map on x64 (large-scale mapping requires more RAM). You could also try building PCL from source. If I remember correctly, I installed standard Qt4, then built VTK with Qt support (to have QVTKWidget). Boost, FLANN, Eigen, QHull and OpenNI can be installed from the binaries on the PCL website.
cheers,
Mathieu
Hi,
Thanks for your reply. I will try commenting out those lines and see what happens. I found the AllInOne package here: http://unanancyowen.com/?p=1255&lang=en. It has already taken me a week to install the program on Windows, and I am starting to wonder whether I should give up. I installed the program on Ubuntu without any problem, but that machine is a desktop and can't be taken home, and my laptop runs Windows.

Two more questions:

1. Is it possible to debug your program frame by frame? I am interested in stereo SLAM. Is it possible to postpone the input of the next frame until I have finished debugging the current one? That is, while I step through the processing of the nth frame, the (n+1)th frame would not be sent to the program, and only once I finish debugging the nth frame would the (n+1)th frame be fed in.

2. Could you give a short description of the methods you use for stereo odometry and for the map assembler? Some keywords for further googling would also be very helpful. My understanding is that, given the input frames, you first do stereo odometry, build the graph, optimize the graph when a loop closure is detected, and then assemble the local maps into a global map (correct me if I am wrong). Your paper focuses mainly on loop closure detection, and you also give a reference paper for graph optimization, but I couldn't find any clues about the stereo odometry and map assembler used in your program. Could you give some more information about them? It would make the code much easier to read.

I really appreciate your help.
Best regards,
Yanming
Administrator
Hi,
Indeed, Ubuntu makes things a lot easier to build from source. I don't know if you have seen it, but there are instructions here for how to build RTAB-Map from source with Visual Studio.

1. Debugging the whole process frame by frame can be difficult because the modules are time-dependent. For example, the Camera publishes images at 30 Hz, Odometry processes these images as fast as it can (maybe ~15 Hz), and the map update is done at 1 Hz. You can set the logger level to debug to see what is going on, and set the camera rate to 1 Hz, for example. You can also shift-click on the Pause button so that the camera sends only one frame. Debugging without events can also be easier; one way is to build a program without threads and events, like this new C++ example.

2. The map assembler simply creates a point cloud from the data of each node of the graph, then uses their optimized poses to assemble them (see the sketch below). Each time the graph is re-optimized, the nodes may move, so all point clouds are updated according to the new poses. For visual odometry information, see this post and this post. The new C++ example referred to above may help to understand it better too.

cheers
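[Editor's note] To make point 2 concrete, here is a minimal sketch of that assembly step using PCL. It is not RTAB-Map's actual code; the function name and container layout are made up for illustration, only the idea (per-node clouds transformed by the optimized poses and concatenated) comes from the post above.

// Minimal sketch of the map-assembly idea: each node keeps its point cloud
// in its own local frame, and the optimized poses are applied at assembly time.
#include <map>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/common/transforms.h>
#include <Eigen/Geometry>

typedef pcl::PointCloud<pcl::PointXYZRGB> Cloud;

// nodeClouds:     local cloud of each graph node (e.g., generated from the
//                 stereo images stored in that node)
// optimizedPoses: pose of each node after graph optimization
Cloud::Ptr assembleMap(const std::map<int, Cloud::Ptr> & nodeClouds,
                       const std::map<int, Eigen::Affine3f> & optimizedPoses)
{
    Cloud::Ptr assembled(new Cloud);
    for(std::map<int, Eigen::Affine3f>::const_iterator iter = optimizedPoses.begin();
        iter != optimizedPoses.end(); ++iter)
    {
        std::map<int, Cloud::Ptr>::const_iterator jter = nodeClouds.find(iter->first);
        if(jter != nodeClouds.end())
        {
            Cloud transformed;
            // Move the node's local cloud into the map frame using its optimized pose
            pcl::transformPointCloud(*jter->second, transformed, iter->second);
            *assembled += transformed;
        }
    }
    return assembled; // rebuilt from scratch whenever the poses change
}

Because the global cloud is rebuilt from the per-node clouds, a re-optimization of the graph only requires re-running this loop with the new poses.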
Hi,
Thanks for your quick reply. I have read your instructions about building RTAB-Map with MSVC. Do you work in Release mode most of the time and do your debugging mainly through the log file? I tried to build in Debug mode in the IDE; do you think that is necessary?
Best regards,
Yanming
Administrator
Hi,
I always use RTAB-Map multi-threaded, so I debug only with the log file (or the log in the console). Lowering the camera frame rate makes the log easier to read. You can build in Debug mode if you want to do step-by-step debugging in the IDE using the example above. On my computer I have trouble building in Debug, as some dependencies (like VTK) were not built in Debug mode.
cheers,
Mathieu
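[Editor's note] If you build your own small single-threaded example as suggested above, the console logger can also be made more verbose programmatically. A short sketch, assuming the UtiLite logger API bundled with RTAB-Map (check the headers of your version for the exact names):

#include <rtabmap/utilite/ULogger.h>

int main(int argc, char * argv[])
{
    // Print the log to the console instead of a file
    ULogger::setType(ULogger::kTypeConsole);
    // Show debug-level messages (kInfo or kWarning are less verbose)
    ULogger::setLevel(ULogger::kDebug);

    // ... create the camera, odometry and Rtabmap objects and process frames ...
    return 0;
}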
Hi Mathieu,
I really appreciate your help. It is fairly straightforward to run your stereo mapping tutorial on ROS, but I still can't reproduce it with the standalone RTAB-Map application on Ubuntu. I use the head of the indigo_devel branch. If I select a database of stereo images as input, the menu "Detection -> Select source" is always disabled, even after I load the stereo.ini file. If I open the database from the File menu and press the "Start" button, the app crashes with the following error:

terminate called after throwing an instance of 'cv::Exception'
what(): /build/buildd/opencv-2.4.8+dfsg1/modules/core/src/persistence.cpp:2252: error: (-212) ......... /Documents/RTAB-Map/camera_info/stereo_20Hz_left.yaml(1): Valid XML should start with '<?xml ...?>' in function icvXMLParse

I had done some camera calibration with OpenCV before, and its output is in .yml format; .yaml is the format of the files output by ROS. I am not sure if that is the root cause. Could you give me some guidance on this problem?
Best regards,
Yanming
Administrator
Hi,
I've updated the tutorial for the directory-of-images and database sources. The problem was that OpenCV's YAML reader needs the header "%YAML:1.0" at the top of the files, otherwise it assumes they are XML. I've re-uploaded the archive with the modifications. However, ROS uses Python to read yaml files, and Python has an error reading the "%YAML:1.0" header, so you must comment that line out as "#%YAML:1.0" for the launch file. This is all explained in the tutorial now. If you don't want to download the archive again, here are the yaml files that work with the standalone:

stereo_20Hz_left.yaml:

%YAML:1.0
camera_name: stereo_20Hz_left
image_width: 640
image_height: 480
camera_matrix:
   rows: 3
   cols: 3
   data: [ 4.8760873413085938e+02, 0., 3.1811621093750000e+02, 0., 4.8760873413085938e+02, 2.4944424438476562e+02, 0., 0., 1. ]
distortion_coefficients:
   rows: 1
   cols: 5
   data: [ 0., 0., 0., 0., 0. ]
distortion_model: plumb_bob
rectification_matrix:
   rows: 3
   cols: 3
   data: [ 1., 0., 0., 0., 1., 0., 0., 0., 1. ]
projection_matrix:
   rows: 3
   cols: 4
   data: [ 4.8760873413085938e+02, 0., 3.1811621093750000e+02, 0., 0., 4.8760873413085938e+02, 2.4944424438476562e+02, 0., 0., 0., 1., 0. ]

stereo_20Hz_right.yaml:

%YAML:1.0
camera_name: stereo_20Hz_right
image_width: 640
image_height: 480
camera_matrix:
   rows: 3
   cols: 3
   data: [ 4.8760873413085938e+02, 0., 3.1811621093750000e+02, 0., 4.8760873413085938e+02, 2.4944424438476562e+02, 0., 0., 1. ]
distortion_coefficients:
   rows: 1
   cols: 5
   data: [ 0., 0., 0., 0., 0. ]
distortion_model: plumb_bob
rectification_matrix:
   rows: 3
   cols: 3
   data: [ 1., 0., 0., 0., 1., 0., 0., 0., 1. ]
projection_matrix:
   rows: 3
   cols: 4
   data: [ 4.8760873413085938e+02, 0., 3.1811621093750000e+02, -5.8362700032946350e+01, 0., 4.8760873413085938e+02, 2.4944424438476562e+02, 0., 0., 0., 1., 0. ]

thx for the report,
Mathieu
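[Editor's note] A quick way to check whether a calibration file will be accepted by an OpenCV-based reader is to load it yourself with cv::FileStorage. This is a minimal sketch mirroring OpenCV 2.4 behaviour, not RTAB-Map's exact loading code; the file name is a placeholder.

#include <cstdio>
#include <vector>
#include <opencv2/core/core.hpp>

int main()
{
    try
    {
        // Without "%YAML:1.0" on the first line, OpenCV 2.4 assumes the file is
        // XML and throws the "Valid XML should start with '<?xml ...?>'"
        // exception seen in the crash above.
        cv::FileStorage fs("stereo_20Hz_left.yaml", cv::FileStorage::READ);
        if(!fs.isOpened())
        {
            std::printf("Could not open the calibration file.\n");
            return 1;
        }

        int width  = (int)fs["image_width"];
        int height = (int)fs["image_height"];

        // Matrices stored as plain rows/cols/data maps can be read field by field
        std::vector<double> K;
        fs["camera_matrix"]["data"] >> K;

        std::printf("%dx%d  fx=%f fy=%f cx=%f cy=%f\n",
                    width, height, K[0], K[4], K[2], K[5]);
    }
    catch(const cv::Exception & e)
    {
        std::printf("OpenCV failed to parse the file: %s\n", e.what());
        return 1;
    }
    return 0;
}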
Hi Mathieu,
I have been reading your code for the last two days. My questions are summarized below.

1. There are two instances of the Memory class, one for odometry and one for loop closure detection, right?
2. The conditions for calling the optimization function seem to differ between the ROS version and the standalone version. Could you give me a short introduction to the conditions used in the standalone version? Thanks.
3. Is the rtabmap program able to export the trajectory of the stereo camera?

Thanks a lot!
Yanming
Administrator
Hi Yanming,
1. Yes. The one used by OdometryBOW is just for convenience when creating signatures (to avoid duplicating code already done by the Memory class). The main purpose of the Memory class is mapping and database management, which is the instance used by the Rtabmap class. Rtabmap uses the signatures created in it for loop closure detection.

2. By default, optimization is done exactly the same way in the standalone application and in the rtabmap node. The rtabmap node (CoreWrapper.cpp) is just a wrapper of the Rtabmap class for ROS connections; the mapping work is still done in the Rtabmap class. Are you referring to the map_optimizer node? That node can be used to optimize the graph outside the rtabmap node, and its parameters are more limited than the optimization done inside rtabmap. Normally you shouldn't use this node; it is for a more advanced and specific usage.

3. Yes, see the Edit->Advanced->"Export Poses..." menu in the standalone (when paused or stopped). In ROS, you can subscribe to /rtabmap/mapData or /rtabmap/mapGraph (the latter is available only from 0.10.6). The "poses" field contains the position of each node added to the map.

cheers,
Mathieu
Hi Mathieu,
Thanks for your prompt reply.
Yanming
Hi Mathieu,
I have some questions about how to set up experiments, which may not be that directly related to your algorithms. I would really appreciate it if you could give me some answers. I now have a P3-DX and a stereo camera. I want to mount the camera on the P3-DX and then measure the accuracy of the map and trajectory estimated by your algorithms. My questions are:

1. Our robot will run indoors in a favourable environment: the floor is not slippery and is very flat. I want to use the output of the wheel encoders as the ground truth for the trajectory. Is this OK?
2. Do I need to measure the relative pose between the camera, the P3-DX platform, and the wheels? I noticed you set an idealized value for "camera_base_link" in the launch file of your tutorial. Could you provide some links or papers on how to deal with this?
3. Could you provide some papers about how to measure the accuracy of maps and trajectories?

Thanks a lot.
Best regards,
Yanming
Administrator
Hi,
1. You can't use the wheel encoders as ground truth. Ground truth in indoor environments is generally recorded with an external motion capture system (like a VICON); see the RGB-D SLAM datasets for examples. For outdoor environments, a DGPS or a GPS+IMU with some manual corrections is used.

2. If you want to compare trajectories between wheel odometry and visual odometry, you may have to put the results in the same frame of reference (e.g., in the /base_link frame). See this tutorial about TF on a robot. If the camera is looking forward, you may not have to add a special transform between /base_link and /camera_link. In the stereo tutorial, the camera is looking down at ~30 degrees and the mapping is not in 2D, so a relative transform between the camera and the base of the robot is required so that visual odometry is expressed in the base frame and not the camera frame (see the sketch below).

3. The absolute trajectory error (ATE) and relative pose error (RPE) metrics are commonly used (see the RGB-D SLAM datasets or the KITTI datasets).

cheers
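[Editor's note] For point 2, publishing the camera-to-base transform can be done with a small TF broadcaster; a sketch using the standard ROS tf API. The frame names follow the post above, while the translation values and the ~30-degree pitch are placeholders to replace with your own measurements (a static_transform_publisher in the launch file does the same job).

#include <ros/ros.h>
#include <tf/transform_broadcaster.h>

int main(int argc, char ** argv)
{
    ros::init(argc, argv, "camera_tf_broadcaster");
    ros::NodeHandle nh;

    tf::TransformBroadcaster br;
    ros::Rate rate(20);
    while(nh.ok())
    {
        tf::Transform transform;
        // Camera mounted 20 cm forward and 30 cm above the base (placeholder values)
        transform.setOrigin(tf::Vector3(0.20, 0.0, 0.30));
        // Pitched down by ~30 degrees (roll, pitch, yaw in radians)
        tf::Quaternion q;
        q.setRPY(0.0, 0.52, 0.0);
        transform.setRotation(q);
        br.sendTransform(tf::StampedTransform(transform, ros::Time::now(),
                                              "base_link", "camera_link"));
        rate.sleep();
    }
    return 0;
}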