Hi~
When using rtabmap for 2D mapping, it keeps correcting the Z value, which should stay at zero. What should I do to restrict mapping to 2D? Thank you so much~

Also, what default feature detector does rtabmap use if OpenCV was compiled without the nonfree module? Thanks~

Administrator:
Hi,
In the standalone app, under Preferences (Advanced): check Motion Estimation -> "Force 2D" and Graph Optimization -> "2D SLAM". For ROS, see the "Three DOF mapping" section on this page. cheers
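
For the ROS side, a minimal sketch of forcing three DOF mapping (the Reg/Force3DoF parameter and the rtabmap_args/odom_args launch arguments are taken from recent rtabmap_ros versions; verify them against your install):

# Force 3DoF (x, y, yaw) registration and graph optimization
roslaunch rtabmap_ros rtabmap.launch \
    rtabmap_args:="--delete_db_on_start --Reg/Force3DoF true" \
    odom_args:="--Reg/Force3DoF true"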

Administrator:
ORB is used by default.
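
If you want to check or change the detector explicitly in ROS, the feature type is exposed as RTAB-Map parameters; a hedged sketch (the Kp/DetectorStrategy and Vis/FeatureType names and the 2 = ORB value come from RTAB-Map's parameter list, so confirm the enumeration for your version):

# Explicitly select ORB for loop closure detection and visual registration
roslaunch rtabmap_ros rtabmap.launch \
    rtabmap_args:="--Kp/DetectorStrategy 2 --Vis/FeatureType 2"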

Thanks~
I think the robot odometry is very important. In my case the wheel odometry is not very accurate (about 10% bias), so rtabmap cannot fuse it well. I don't know what to do. Could you give me some advice? Thanks a lot.

In reply to this post by matlabbe
Since the odometry is so important, I use robot_pose_ekf to get a better estimate: I fuse the visual odometry and the wheel odometry into odom_combined. But on my robot the camera is not fixed to the base, so do I need to transform the visual odometry into the base frame before the EKF subscribes to it, or can the EKF use it directly? Thank you~

In reply to this post by matlabbe
Also, the robot's speed seems very important. For a TurtleBot mapping with rtabmap, should the speed be limited to 0.1 m/s? Is that OK?
Thanks~

Administrator:
Hi,
The TF between the base and camera frames doesn't need to be fixed, but it should be known (and very precise) so that images can be correctly transformed into the base frame for visual odometry. The maximum robot speed depends on the frame rate at which odometry can be computed: if odometry runs at 30 Hz you can move fast; if it can only be computed at 3-5 Hz, you may need to move slowly (in particular, rotate slowly). There are also other conditions that can affect odometry, see the "Lost Odometry" section on this page. You can also look at the robot_localization package for sensor fusion. cheers
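
If the camera is in fact rigidly mounted, a common way to provide that precise transform is a static TF publisher; a minimal sketch (the frame names and the translation/rotation values are placeholders to measure on your own robot):

# Publish base_link -> camera_link (args: x y z yaw pitch roll parent child period_ms)
rosrun tf static_transform_publisher 0.1 0.0 0.3 0 0 0 base_link camera_link 100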

Thanks~
As you say, the /base_footprint to /camera_link transform should be very precise. But in my RGB-D camera the RGB and depth cameras are separate units. I have done the calibration between them, so should their relative pose matrix be written into TF? If not, how should the pose matrix be used? Thank you so much~

Administrator:
If your cameras are calibrated together, that is fine. Make sure the depth is registered to the color camera and that the two are rigidly fixed together (the RGB and depth cameras move as one unit); then only the frame of the RGB camera is required. I was referring to /base_footprint -> /camera_link (in case the RGB-D camera is not fixed on the robot), not /rgb_camera -> /depth_camera.
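
A sketch of enabling depth-to-color registration at the driver level, assuming an OpenNI2-compatible camera (other drivers expose a similar flag):

# Register depth images into the RGB camera frame in the driver
roslaunch openni2_launch openni2.launch depth_registration:=true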

Thank you, I understand that~
By the way, the odometry should be very precise, but how can I tell whether it is precise enough? Any suggestion? Thank you ^_^

Administrator:
Hi,
My "very precise" comment above was about the transform between /base_link and /camera_link (which is generally fixed), not /odom -> /base_link (which is odometry). In general, the better the odometry, the better the localization. RTAB-Map can correct odometry drift on loop closures, but if there is no loop closure for a while (the robot keeps exploring new places) and the odometry is bad, the localization becomes almost meaningless after a couple of meters, which makes navigation difficult. Some robots can move tens of meters with only 1 cm of absolute drift, while others can have 1 meter of absolute drift or more. cheers

Hi~
In other words, the required odometry precision depends on loop closing. So, for a stable environment, which parameters can help to close loops? Thank you so much~

Administrator:
If the robot doesn't plan trajectories that revisit previous places, it will never find a loop closure. So the odometry should be precise enough that the robot can navigate back, within the accumulated uncertainty, into a "zone" where a loop closure may be detected. Then, when the robot is navigating back through a previously visited area, the loop closure detection approach may or may not detect it. For more info, see this paper: Appearance-Based Loop Closure Detection for Online Large-Scale and Long-Term Operation.
If you already have a map, the camera should be able to "see" the environment the same way (from the same direction) as when you mapped it previously, otherwise it will not localize. cheers
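
If you want to tune the detection itself, two commonly adjusted RTAB-Map parameters are the detection rate and the loop closure hypothesis threshold; a hedged sketch (the names and the ~0.11 default are from RTAB-Map's parameter list, verify for your version; a lower Rtabmap/LoopThr accepts more hypotheses at the risk of false positives):

# Loop closure detection rate (Hz) and hypothesis acceptance threshold
roslaunch rtabmap_ros rtabmap.launch \
    rtabmap_args:="--Rtabmap/DetectionRate 1 --Rtabmap/LoopThr 0.11"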