needing help about some principles


Mostafa_TCO
Hi rtabmap,
My team and I are beginners with ROS and rtabmap. We have some questions and would be grateful for a little help.
Our main project:
deminer robots (two robots: a seeker robot that searches the area for mines, then calls a second robot to grab each mine with a gripper and carry it out of the map; note that the map contains obstacles and the robots must avoid them)
Our hardware:
1) Kinect v1, 2) RPLIDAR A1, 3) two wheeled robots with encoders, 4) mini-PC with Ubuntu 14.04 and ROS Indigo, 5) Arduino Nano
What we have done so far:
reading and working with ROS, rtabmap (Kinect + odometry + laser, and Kinect + odometry + fake laser), gmapping, hector_slam, the navigation stack, Object Recognition Kitchen (ORK), and find_object_2d/3d
Questions:
1) What is the best way to detect mines with the Kinect v1? (we tried find_object_2d/3d and ORK)
2) What is the best way to navigate autonomously, avoid obstacles, and plan paths? (our most important problem; we are stuck on this)
3) How do we read encoder data/odometry from the wheels and use it in ROS?
4) What is the best way for the two robots to communicate in ROS? (help with publishing and subscribing)

In general, we mostly need insight into the overall project: we partially understand each piece, but we do not know how to connect everything together.

Re: needing help about some principles

matlabbe
Administrator
Hi,

Some of your questions are general ROS topics. For anything not related to mapping, you may also get more help by asking on ROS Answers.

1) For object recognition, it depends on what the "mines" look like. The find_object_2d package is based on OpenCV visual features, which may not work well when only a few discriminative features can be extracted. Some tuning is possible for the feature type used, to extract more (or better) features; playing with these parameters live in the GUI can help you see what works best. Otherwise, neural network or template matching approaches (e.g., YOLO, TLD) may also be worth a look.
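To make the template-matching idea above concrete, here is a minimal NumPy sketch of normalized cross-correlation matching (the core of what OpenCV's matchTemplate does). It is an illustrative toy, not the find_object_2d pipeline: the function name and the brute-force scan are mine, and a real detector would use a pyramid and a library implementation for speed.

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` (both 2-D grayscale arrays) and
    return the (row, col) with the best normalized cross-correlation
    score, plus that score (1.0 means a perfect match)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -1.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            if denom == 0:          # flat patch, no correlation defined
                continue
            score = (p * t).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

Because the score is normalized, it is robust to uniform brightness changes, which is why template matching can work on objects with few corner-like features.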

2) I will refer you to the tutorials for the ROS navigation stack. As a follow-up, here is another example of integrating the navigation stack with RTAB-Map on a TurtleBot robot.

3) You would have a node that reads the encoders, estimates the odometry, and publishes it on ROS. See this page. For the best mapping results, the odometry should not drift too much. Make sure to calibrate the values (at least manually) to get decent odometry (e.g., less than 5 degrees of error after a 360-degree turn, or less than 5 cm of error when moving 5 meters).
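The math such a node would run is standard differential-drive dead reckoning. Below is a plain-Python sketch of the pose update from encoder tick deltas; the parameter values are hypothetical placeholders you must replace with your calibrated numbers, and in a real node you would wrap this in rospy, publish nav_msgs/Odometry, and broadcast the odom->base_link TF.

```python
import math

# Hypothetical robot parameters -- calibrate these on your real robot.
TICKS_PER_REV = 1024      # encoder ticks per wheel revolution
WHEEL_RADIUS = 0.035      # wheel radius in metres
WHEEL_BASE = 0.20         # distance between the two wheels, metres

def update_odometry(x, y, theta, d_ticks_left, d_ticks_right):
    """Integrate one pair of encoder tick deltas into the pose
    (x, y in metres, theta in radians) and return the new pose."""
    metres_per_tick = 2.0 * math.pi * WHEEL_RADIUS / TICKS_PER_REV
    d_left = d_ticks_left * metres_per_tick
    d_right = d_ticks_right * metres_per_tick
    d_center = (d_left + d_right) / 2.0        # forward distance
    d_theta = (d_right - d_left) / WHEEL_BASE  # heading change
    # Integrate along the arc using the mid-point heading.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta
```

For example, one full revolution of both wheels from the origin should move the robot straight ahead by one wheel circumference (about 0.22 m with the radius above); checks like this are an easy way to validate your calibration before trusting the odometry for mapping.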

4) You may choose one robot to be the master, or have a third computer act as the master. See this tutorial. Will the robots be running at the same time? If you want robot B to localize in a map that robot A is still building, that may not be possible with the available mapping packages without modifications. However, if robot A maps the environment first and you then copy the map file to robot B, robot B can localize in it.
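The single-master setup above boils down to environment configuration: run one roscore and point every machine at it. A sketch, with hypothetical hostname and IP values you would replace with your own:

```shell
# On the master robot (hostname assumed "robot-a"), start the master:
#   roscore
# On every machine (including robot-a itself), before launching any
# node, point ROS at that master and advertise a reachable address:
export ROS_MASTER_URI=http://robot-a:11311
export ROS_IP=192.168.1.42   # this machine's own IP (example value)
```

Once both robots share a master, robot B can simply subscribe to any topic robot A publishes (e.g., a "mine found" pose), which covers the publish/listen part of your question.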

cheers,
Mathieu

Re: needing help about some principles

Mostafa_TCO
Hi Mathieu,
Thank you for the effort and time you put into answering our questions.

Best regards,
Mostafa_TCO