RTABMAP standalone for localization in xyz (Kinect Xbox 360)

RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
I want to get the localized coordinates of my camera (mounted on a robot) as x, y, z values in a map already built with RTAB-Map mapping on Windows. For every detection I need the output coordinates. Please reply as simply as possible; I need this for my SLAM and localization project. I am using a Kinect for Xbox 360 and only visual odometry. Thanks in advance! :)

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
Hi,

Did you try this tutorial in localization mode?
$ roslaunch freenect_launch freenect.launch depth_registration:=true
$ roslaunch rtabmap_ros rtabmap.launch localization:=true database_path:="my_map.db"
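If you then need the localized x, y, z in your own node, here is a minimal sketch (not from the tutorial; it assumes rtabmap publishes /rtabmap/localization_pose as geometry_msgs/PoseWithCovarianceStamped in localization mode, adjust the topic name to your setup):

#include <ros/ros.h>
#include <geometry_msgs/PoseWithCovarianceStamped.h>

void poseCallback(const geometry_msgs::PoseWithCovarianceStamped::ConstPtr & msg)
{
   // x, y, z of the robot in the map frame (meters)
   ROS_INFO("Localized at x=%f y=%f z=%f",
            msg->pose.pose.position.x,
            msg->pose.pose.position.y,
            msg->pose.pose.position.z);
}

int main(int argc, char ** argv)
{
   ros::init(argc, argv, "localization_listener");
   ros::NodeHandle nh;
   ros::Subscriber sub = nh.subscribe("/rtabmap/localization_pose", 1, poseCallback);
   ros::spin();
   return 0;
}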
cheers,
Mathieu

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
Can I do it without ROS? I am running RTAB-Map on Windows, and on Ubuntu 16.04 ROS Kinetic gives errors when downloading packages.

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
If you want real-time outputs, you should go with the ROS package. What are the errors when downloading packages?

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
CloudViewer.cpp:1931::setCloudVisibility() Cannot find actor named "cloudOdom".

The standalone version of rtabmap gives this error.

On the ROS version, the error is: E: Unable to locate package ros-kinetic-rtabmap-ros

If I build from source:
E: Unable to locate package libdc1394-dev
E: Unable to locate package ros-kinetic-openni-launch
E: Unable to locate package ros-kinetic-openni2-launch
E: Unable to locate package ros-kinetic-freenect-launch
E: Unable to locate package ros-kinetic-costmap-2d
E: Unable to locate package ros-kinetic-octomap-ros
E: Unable to locate package ros-kinetic-g2o

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
Hi,

For ROS, make sure your computer is correctly set up (following these installation instructions) so that apt-get can find the ROS packages.

The error "Cannot find actor named "cloudOdom"." should not happen normally. The only place in the code where I can see that it could be called while the CloudViewer doesn't have this cloud is from here. If it does, you may have also other errors shown (like "Adding cloudOdom to viewer failed!"), otherwise not sure why cloudOdom is not added to CloudViewer here.

cheers,
Mathieu

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
Thank you, the above all worked. If we have 2 images taken by the Kinect, how do I find the motion that has happened between them? Could you please provide the code. :)

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
I want to know how the whole localization process happens and, given 2 RGB images, how to find the robot motion. I want to code it myself, taking help from your code. I would be very grateful if you could reply quickly. Thanks in advance :)

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
This post was updated on .
Hi,

RTAB-Map doesn't do motion estimation using only RGB; it also requires depth images (or stereo images). Without going deep into all the options for motion estimation, here is a simple example using the RegistrationVis class:
#include <rtabmap/core/RegistrationVis.h>
#include <rtabmap/core/CameraRGBD.h>
#include <cstdio>
using namespace rtabmap;

int main(int argc, char * argv[])
{
   CameraFreenect camera; // Assume Kinect for XBOX 360
   camera.setImageRate(1); // capture at 1 Hz
   Transform opticalRotation(0,0,1,0, -1,0,0,0, 0,-1,0,0);
   camera.setLocalTransform(opticalRotation); // from camera frame to rtabmap/ros frame
   if(!camera.init())
   {
      printf("Failed to initialize the camera!\n");
      return -1;
   }

   RegistrationVis registration;

   SensorData frame = camera.takeImage();
   SensorData previousFrame = frame;
   while(frame.isValid())
   {
      Transform t = registration.computeTransformation(previousFrame, frame);
      if(t.isNull())
      {
         printf("Could not compute motion\n");
      }
      else
      {
         printf("Motion: %s\n", t.prettyPrint().c_str());
      }
      previousFrame = frame;
      frame = camera.takeImage();
   }

   return 0;
}

The RegistrationVis class may look a little complicated at first; that is because it can handle the case where visual features have to be computed and matched. The actual transformation computation is done by functions in util3d_motion_estimation.h. For a more general usage of RegistrationVis, you can look at the OdometryF2F class.
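If you want to know why a registration failed or how good it is, here is a small sketch passing a RegistrationInfo (this assumes the computeTransformation() overload taking a guess and a RegistrationInfo pointer, and the info.inliers / info.rejectedMsg fields):

#include <rtabmap/core/RegistrationVis.h>
#include <cstdio>
using namespace rtabmap;

Transform registerFrames(RegistrationVis & registration,
                         const SensorData & from,
                         const SensorData & to)
{
   RegistrationInfo info;
   Transform t = registration.computeTransformation(from, to, Transform::getIdentity(), &info);
   if(t.isNull())
   {
      printf("Registration failed: %s\n", info.rejectedMsg.c_str());
   }
   else
   {
      printf("Registration ok: %d inliers\n", info.inliers);
   }
   return t;
}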

cheers,
Mathieu

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
Can I run this code using Visual Studio?

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
In reply to this post by matlabbe
Also, I currently don't have access to the Kinect; I have stored RGB and depth images. How can I supply them?

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
This post was updated on .
In reply to this post by rit2014006
On Windows, I would use CameraOpenNI2. But yes, if you are able to build rtabmap on Windows, you should be able to use this.
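For example, in the first code example you could simply swap the camera class (a sketch; it assumes the default CameraOpenNI2 constructor from CameraRGBD.h, the rest of the Camera interface is the same):

#include <rtabmap/core/CameraRGBD.h>
#include <cstdio>
using namespace rtabmap;

int main()
{
   CameraOpenNI2 camera;                      // instead of CameraFreenect
   camera.setImageRate(1);                    // capture at 1 Hz
   Transform opticalRotation(0,0,1,0, -1,0,0,0, 0,-1,0,0);
   camera.setLocalTransform(opticalRotation); // from camera frame to rtabmap/ros frame
   if(!camera.init())
   {
      printf("Failed to initialize the camera!\n");
      return -1;
   }
   // ... then use camera.takeImage() exactly as in the Kinect example above
   return 0;
}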

EDIT
For a folder of RGB and depth images: the images should already be rectified, the depth should be PNG (32 bits in meters or 16 bits in mm) and registered to the RGB camera, and you should have a calibration file (e.g., "calibration.yaml"):
CameraRGBDImages camera(pathRGBImages, pathDepthImages);
Transform opticalRotation(0,0,1,0, -1,0,0,0, 0,-1,0,0);
camera.setLocalTransform(opticalRotation);

if(camera.init(".", "calibration")) // use "calibration.yaml" in the current directory
{
    ...
}

You can use the rtabmap app to create a calibration file if you already know the focal length of the RGB camera: click on Preferences->Source->Create Calibration.
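If you prefer to create the calibration file programmatically, here is a hedged sketch (it assumes the CameraModel(name, fx, fy, cx, cy) constructor and its save() method; the intrinsics below are placeholders, use your own values):

#include <rtabmap/core/CameraModel.h>
using namespace rtabmap;

int main()
{
   // Example intrinsics only (fx, fy, cx, cy) for a 640x480 RGB image;
   // replace them with your actual calibration values.
   CameraModel model("calibration", 525.0, 525.0, 320.0, 240.0);
   model.save("."); // should write ./calibration.yaml, usable with camera.init(".", "calibration")
   return 0;
}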

cheers,
Mathieu

PS: I also modified the example above with the missing optical transformation.

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
Sorry, I am very new to all this. How do I run the above file on Ubuntu 16.04?

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
Hi,

Ubuntu is the easiest platform on which to build rtabmap, as all dependencies can be installed as binaries (apt-get).

Build rtabmap from source:
https://github.com/introlab/rtabmap/wiki/Installation#ubuntu

Create an app using rtabmap libraries:
https://github.com/introlab/rtabmap/wiki/Cplusplus-RGBD-Mapping

cheers,
Mathieu

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
libpcl 1.7 gives an error that the package is not available when downloading, and I can't add its PPA for Ubuntu 16.04.

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
Hi,

What is the command you used? libpcl-dev should be available by default on Ubuntu 16.04: http://packages.ubuntu.com/xenial/libpcl-dev

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
Thank you. If I want to find locations from the transforms, should I do a vector addition between successive transformations?
Also, what is the format of the transformations?
Motion: xyz=0.094200,-0.043168,-0.015102 rpy=0.015845,0.042616,-0.123761
x, y, z is in meters, I guess?
Also, how can I access the individual elements of the transformation t?

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
What is the algorithm used for computing which features match between 2 images and then finding the transformation?
Also, how do I do localization from the individual transformations between images?
Thanks :)

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

rit2014006
#include <rtabmap/core/RegistrationVis.h>
#include <rtabmap/core/CameraRGBD.h>
#include <rtabmap/core/util3d_transforms.h>
#include <cstdio>
#include <iostream>
using namespace rtabmap;

int main(int argc, char * argv[])
{
   //CameraFreenect camera; // Assume Kinect for XBOX 360
   //camera.setImageRate(1); // capture at 1 Hz

   CameraRGBDImages camera("/home/rahul/uimages/rgb/", "/home/rahul/uimages/depth/");
   //Transform opticalRotation(0,0,1,0, -1,0,0,0, 0,-1,0,0);
   //camera.setLocalTransform(opticalRotation); // from camera frame to rtabmap/ros frame
   camera.init(".", "calibration"); // use "calibration.yaml" in the current directory

   if(!camera.init())
   {
      printf("Failed to initialize the camera!\n");
      return -1;
   }

   RegistrationVis registration;

   SensorData frame = camera.takeImage();
   SensorData previousFrame = frame;

   int k = 1;
   cv::Point3f p(0,0,0);

   while(frame.isValid())
   {
      Transform t = registration.computeTransformation(previousFrame, frame);

      if(t.isNull())
      {
         printf("Could not compute motion\n");
      }
      else
      {
         p = util3d::transformPoint(p, t);
         std::cout << p << "\n";
         /* Transform t2 = t1;
            t2 = t2.to3DoF();
            t1 = t1 * t;
            //printf("Motion: %s\n\n%s\n", t.prettyPrint().c_str(), t1.prettyPrint().c_str());
            printf("%d Motion: %s\n", k, t2.prettyPrint().c_str()); */
      }
      previousFrame = frame;
      frame = camera.takeImage();
      k++;
   }

   return 0;
}
 
The above code returns wrong output when I try to localize. Also, why are we using an optical rotation matrix? It gives wrong output with and without it. Can you please explain what is wrong with my code? Thank you :)

Re: RTABMAP standalone for localization in xyz (Kinect Xbox 360)

matlabbe
Administrator
Hi,

Can you share a couple of images to test the code with, along with the calibration file you used, so we can first check that the inputs are correct?

In the first example, the keyframe changes at every frame, so multiplying the transforms gives you the pose relative to the starting position of the camera. If t is null, you should not change the keyframe:
Transform pose = Transform::getIdentity();
while(frame.isValid())
{
   Transform t = registration.computeTransformation(previousFrame, frame);

   if(t.isNull())
   {
      printf("Could not compute motion\n");
   }
   else
   {
      pose *= t;
      printf("%d Motion: %s\n", k, pose.prettyPrint().c_str());
      previousFrame = frame;
   }
   frame = camera.takeImage();
   k++;
}

The transform is printed in meters and radians: "x y z roll pitch yaw". Features are detected by default with the OpenCV GFTT detector, then BRIEF descriptors are extracted from the keypoints. Descriptors are matched using the NNDR (Nearest Neighbor Distance Ratio) approach.
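To access the individual elements of the transformation t in code, here is a small sketch (assuming the x()/y()/z() and getEulerAngles() accessors of rtabmap::Transform):

#include <rtabmap/core/Transform.h>
#include <cstdio>

void printTransform(const rtabmap::Transform & t)
{
   float roll, pitch, yaw;
   t.getEulerAngles(roll, pitch, yaw);  // radians
   printf("x=%f y=%f z=%f (m)  roll=%f pitch=%f yaw=%f (rad)\n",
          t.x(), t.y(), t.z(), roll, pitch, yaw);
}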

EDIT
The optical rotation is optional, but required if you want the poses in the ROS referential: x-forward, y-left, z-up. Without the optical rotation, the poses will be in the camera optical frame: x-right, y-down, z-forward.
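If you computed a motion without setting the local transform (so it is in the camera optical frame), you can convert it afterwards with a change of basis, for example (a sketch using the same opticalRotation as in the examples above):

#include <rtabmap/core/Transform.h>

// Express a motion computed in the camera optical frame (x-right, y-down, z-forward)
// in the rtabmap/ROS base frame (x-forward, y-left, z-up).
rtabmap::Transform cameraToBaseMotion(const rtabmap::Transform & motionInCameraFrame)
{
   rtabmap::Transform opticalRotation(0,0,1,0, -1,0,0,0, 0,-1,0,0);
   return opticalRotation * motionInCameraFrame * opticalRotation.inverse();
}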

cheers,
Mathieu