Based on the C++ API, you could create two CameraRealsense2 objects with the corresponding device ids. For a simpler example, you can avoid using events by following this
example (the API of the events library can be found
here; note that a modified version is included in rtabmap, but the basic functionality is the same). In your main loop, query images from both cameras, then merge the two SensorData objects into a single one. Note that the merge is only possible if both cameras output the same resolution.
To merge the SensorData, put the images side by side in the same OpenCV matrices:
// Create the odometry and map objects
std::unique_ptr<rtabmap::Odometry> odometry(rtabmap::Odometry::create(rtabmap::Odometry::kTypeF2M));
rtabmap::Rtabmap rtabmap;
rtabmap.init(rtabmap::ParametersMap(), "rtabmap.db"); // default parameters, database path is an example
double previous_stamp = 0;

// One driver object per camera (pass the corresponding device id/serial to the constructor)
rtabmap::CameraRealsense2 camera1;
rtabmap::CameraRealsense2 camera2;
if(!camera1.init() || !camera2.init())
{
    // handle camera initialization failure here
}
// Main loop: grab one frame from each camera
rtabmap::SensorData data1 = camera1.takeImage();
rtabmap::SensorData data2 = camera2.takeImage();

// Concatenate the two RGB images side by side (both cameras must have the same resolution)
int width = data1.imageRaw().cols;
int height = data1.imageRaw().rows;
cv::Mat rgb(height, width*2, data1.imageRaw().type());
data1.imageRaw().copyTo(cv::Mat(rgb, cv::Rect(0, 0, width, height)));
data2.imageRaw().copyTo(cv::Mat(rgb, cv::Rect(width, 0, width, height)));

// Concatenate the two depth images (or right images for stereo) the same way
cv::Mat depth(height, width*2, data1.depthOrRightRaw().type());
data1.depthOrRightRaw().copyTo(cv::Mat(depth, cv::Rect(0, 0, width, height)));
data2.depthOrRightRaw().copyTo(cv::Mat(depth, cv::Rect(width, 0, width, height)));
// If RGB-D (use this block or the stereo one below, not both): keep one CameraModel per camera
std::vector<rtabmap::CameraModel> models;
models.push_back(data1.cameraModels()[0]);
models.push_back(data2.cameraModels()[0]);
rtabmap::SensorData combined_data(rgb, depth, models, 0, data1.stamp());

// If stereo: here "rgb" holds the left images and "depth" holds the right images
std::vector<rtabmap::StereoCameraModel> models;
models.push_back(data1.stereoCameraModels()[0]);
models.push_back(data2.stereoCameraModels()[0]);
rtabmap::SensorData combined_data(rgb, depth, models, 0, data1.stamp());
// Compute odometry on the combined data
rtabmap::OdometryInfo info;
rtabmap::Transform pose = odometry->process(combined_data, &info);

// Update map every second
if(previous_stamp == 0 || combined_data.stamp() - previous_stamp > 1)
{
    rtabmap.process(combined_data, pose);
    previous_stamp = combined_data.stamp();
}
With the event approach (like this
example), you could create a thread that grabs images from both cameras, merges them as above and publishes the resulting "combined_data" in a CameraEvent; a minimal sketch is shown after the note below.
CameraRealsense2 does rectification/depth alignment internally.
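As a rough illustration of such a thread, here is a minimal sketch. It assumes the merging code above is moved into a helper (mergeData() is a made-up name for it) and uses the UThread/UEventsManager utilities included in rtabmap; an OdometryThread and a RtabmapThread subscribed to the events would then consume the posted CameraEvent, as in the referenced example.
class DualCameraThread : public UThread
{
public:
    DualCameraThread(rtabmap::CameraRealsense2 * camera1, rtabmap::CameraRealsense2 * camera2) :
        camera1_(camera1),
        camera2_(camera2)
    {}

protected:
    // Called repeatedly by UThread until the thread is killed
    virtual void mainLoop()
    {
        rtabmap::SensorData data1 = camera1_->takeImage();
        rtabmap::SensorData data2 = camera2_->takeImage();
        if(data1.isValid() && data2.isValid())
        {
            // mergeData() would contain the merging code shown above (hypothetical helper)
            rtabmap::SensorData combined_data = mergeData(data1, data2);
            UEventsManager::post(new rtabmap::CameraEvent(combined_data));
        }
        else
        {
            this->kill(); // stop the thread when a camera stops streaming
        }
    }

private:
    rtabmap::CameraRealsense2 * camera1_;
    rtabmap::CameraRealsense2 * camera2_;
};
The thread would be started with start() and stopped with join(true) when done, similarly to how CameraThread feeds the pipeline in the referenced example.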