Multiple realsense without ROS


Multiple realsense without ROS

StefanoMutti
Hi all,
I am starting from the RGB-D mapping example and I want to integrate multiple RealSense cameras.

I know the ROS packages allow for that, but I cannot use them.

For each camera I modify the local transform with "cam1->setLocalTransform()", but then I don't know whether to attach all the events to the RGBD-map or to the Odometry classes.
Are there any docs regarding this event-based framework?
Also, is the RealSense rectification handled internally?
Thanks


Re: Multiple realsense without ROS

matlabbe
Administrator
Based on the C++ API, you could create two CameraRealsense2 objects with the corresponding device IDs. To keep the example simpler, you can avoid using events by following this example (the API of the events library can be found here; note that a modified version is included in rtabmap, but the basic functionality is the same). In your main loop, query images from both cameras, then merge the two SensorData into a single one. Note that the merge works only if both cameras output the same resolution.

To merge the SensorData, we have to put the images side by side in the same OpenCV matrix:

std::unique_ptr<rtabmap::Odometry> odometry(rtabmap::Odometry::create(rtabmap::Odometry::kTypeF2M));
rtabmap::Rtabmap rtabmap;
rtabmap.init();
double previous_stamp = 0;

rtabmap::CameraRealsense2 camera1; // pass the device serial in the constructor to select a specific camera
rtabmap::CameraRealsense2 camera2;
camera1.init();
camera2.init();

rtabmap::SensorData data1 = camera1.takeImage();
rtabmap::SensorData data2 = camera2.takeImage();

int width = data1.imageRaw().cols;
int height = data1.imageRaw().rows;
cv::Mat rgb(height, width*2, data1.imageRaw().type());
data1.imageRaw().copyTo(cv::Mat(rgb, cv::Rect(0, 0, width, height)));
data2.imageRaw().copyTo(cv::Mat(rgb, cv::Rect(width, 0, width, height)));

cv::Mat depth(height, width*2, data1.depthOrRightRaw().type());
data1.depthOrRightRaw().copyTo(cv::Mat(depth, cv::Rect(0, 0, width, height)));
data2.depthOrRightRaw().copyTo(cv::Mat(depth, cv::Rect(width, 0, width, height)));

// If RGB-D:
std::vector<rtabmap::CameraModel> models;
models.push_back(data1.cameraModels()[0]);
models.push_back(data2.cameraModels()[0]);
rtabmap::SensorData combined_data(rgb, depth, models, 0, data1.stamp());

// If stereo, use this instead ("depth" then holds the right images):
// std::vector<rtabmap::StereoCameraModel> stereoModels;
// stereoModels.push_back(data1.stereoCameraModels()[0]);
// stereoModels.push_back(data2.stereoCameraModels()[0]);
// rtabmap::SensorData combined_data(rgb, depth, stereoModels, 0, data1.stamp());

rtabmap::OdometryInfo info;
rtabmap::Transform pose = odometry->process(combined_data, &info);

// Update map every second
if(previous_stamp==0 || combined_data.stamp() - previous_stamp > 1) {
  rtabmap.process(combined_data, pose);
  previous_stamp = combined_data.stamp();
}

With the event approach (like this example), you could create a thread that gets images from both cameras, then publishes the resulting "combined_data" in a CameraEvent.

CameraRealsense2 does rectification/depth alignment internally.

Re: Multiple realsense without ROS

StefanoMutti
Hi Matlabbe,
thanks for the answer.
Concerning the relative transformations between the cameras and the base frame, is everything embedded in the camera models pushed via "models.push_back(data1.cameraModels()[0])"?
Thanks

Re: Multiple realsense without ROS

matlabbe
Administrator
Yes, if you set "localTransform" for CameraRealsense2, that transform will be embedded in the CameraModel.

cheers,
Mathieu