Re: Losing odometry when doing fixed point scanning
Posted by scanboy on
URL: http://official-rtab-map-forum.206.s1.nabble.com/Losing-odometry-when-doing-fixed-point-scanning-tp4336p4457.html
Hi Mathieu,
Thank you for clarifying the depth factor.
As for the multi-camera setup, I have implemented each camera successfully, and the results are good when generating point clouds individually. But when trying to get the transformation between two multi-camera captures, the alignment is really off: xyz is not even near the actual offset, and rpy makes the second multi-camera capture roll, pitch and yaw heavily, e.g. (from pretty print) xyz=-2.009485,1.608905,-0.912544 rpy=-0.358790,-0.188623,0.930016.
For a simple test case I have taken two captures about 4.5 meters apart in an open space. The two point clouds overlap visibly and the images also visibly line up, so I strongly think it should be possible to align them properly.
I have also tried to feature-match each individual image instead of the whole row of images, but I don't see much difference.
What could be fundamentally wrong with the code below, which first does feature matching and then motion estimation as you described previously in this thread?
I get these warnings
[ WARN] VWDictionary.cpp:525::clear() Visual dictionary would be already empty here (3401 words still in dictionary).
[ WARN] VWDictionary.cpp:529::clear() Not indexed words should be empty here (3401 words still not indexed)
[ WARN] VWDictionary.cpp:525::clear() Visual dictionary would be already empty here (2678 words still in dictionary).
[ WARN] VWDictionary.cpp:529::clear() Not indexed words should be empty here (2678 words still not indexed)
when running, so I suspect something could be wrong with the aggregateWordsAndP3f(...) function, if not something else.
Thanks!
#define DETECTOR_TYPE "SIFT" // Similar results with other types too

// First capture
cv::Mat rowRgb0 = getRGBRow(/* my ix for first capture */); // 24 or 8 images wide
cv::Mat rowDepth0 = getDepthRow(/* my ix for first capture */);
SensorData data0(rowRgb0, rowDepth0, cameraModels); // models are ok as I can generate a good point cloud from each individual SensorData
cv::Mat gray;
cv::cvtColor(rowRgb0, gray, cv::COLOR_BGR2GRAY);
ParametersMap detectParams = Parameters::getDefaultParameters(DETECTOR_TYPE);
uInsert(detectParams, rtabmap::ParametersPair(rtabmap::Parameters::kKpMaxFeatures(), std::string("0")));
Feature2D * f2d0 = Feature2D::create(detectParams);
std::vector<cv::KeyPoint> kp0 = f2d0->generateKeypoints(gray, rowDepth0);
cv::Mat desc0 = f2d0->generateDescriptors(gray, kp0);
std::vector<cv::Point3f> pts3d0 = f2d0->generateKeypoints3D(data0, kp0);
rtabmap.process(data0, Transform::getIdentity());
// Then second capture
cv::Mat rowRgb1 = getRGBRow(/* my ix for second capture */); // 24 or 8 images wide
cv::Mat rowDepth1 = getDepthRow(/* my ix for second capture */);
SensorData data1(rowRgb1, rowDepth1, cameraModels);
cv::cvtColor(rowRgb1, gray, cv::COLOR_BGR2GRAY);
ParametersMap detectParams1 = Parameters::getDefaultParameters(DETECTOR_TYPE);
uInsert(detectParams1, rtabmap::ParametersPair(rtabmap::Parameters::kKpMaxFeatures(), std::string("0")));
Feature2D * f2d1 = Feature2D::create(detectParams1);
std::vector<cv::KeyPoint> kp1 = f2d1->generateKeypoints(gray, rowDepth1);
cv::Mat desc1 = f2d1->generateDescriptors(gray, kp1);
std::vector<cv::Point3f> pts3d1 = f2d1->generateKeypoints3D(data1, kp1);
// Quantize descriptors to word ids and pair each word id with its 3D point
VWDictionary dictionary0;
std::list<int> wordIds0 = dictionary0.addNewWords(desc0, 1);
std::multimap<int, cv::Point3f> map0 = aggregateWordsAndP3f(wordIds0, pts3d0);
VWDictionary dictionary1;
std::list<int> wordIds1 = dictionary1.addNewWords(desc1, 1);
std::multimap<int, cv::Point3f> map1 = aggregateWordsAndP3f(wordIds1, pts3d1);

std::vector<int> matches;
std::vector<int> inliers;
Transform pose = util3d::estimateMotion3DTo3D(
    uMultimapToMapUnique(map0),
    uMultimapToMapUnique(map1),
    4,
    1.0, // Need to put this to 1.0 or 0.5 to get enough inliers
    100, // 50, 100, 1000 gives similar results
    5,
    0,
    &matches, &inliers);
rtabmap.process(data1, pose);
std::multimap<int, cv::Point3f> aggregateWordsAndP3f(const std::list<int> & wordIds, const std::vector<cv::Point3f> & keypoints)
{
    std::multimap<int, cv::Point3f> words;
    std::vector<cv::Point3f>::const_iterator kpIter = keypoints.begin();
    // Pair the i-th word id with the i-th 3D point; stop if either runs out
    for(std::list<int>::const_iterator iter = wordIds.begin();
        iter != wordIds.end() && kpIter != keypoints.end();
        ++iter, ++kpIter)
    {
        words.insert(std::make_pair(*iter, *kpIter));
    }
    return words;
}