VOXL2 Stereo Camera Depth to Point Cloud Transformation
-
Quick question: I was looking through the VOXL2 DFS server. Do you know the camera information or the formula needed to convert the pixel values of the stereo_disparity image to a point cloud?
Traditionally, I believe the formula should be something like the code below. However, I am not sure what the equivalent fx, fy, cx, cy would be for the stereo image.
for (int i = 0; i < depth_image.rows; i += 2) {
    for (int j = 0; j < depth_image.cols; j += 2) {
        geometry_msgs::Point32 pt;
        pt.z = depth_image.at<float>(i, j);
        pt.x = (float) ((j - cx) * pt.z / fx);
        pt.y = (float) ((i - cy) * pt.z / fy);
        cloud_.points.push_back(pt);
    }
}
-
The VOXL2 DFS server publishes the point cloud, so you can use that directly (it is published as x, y, z points).
If you would like to understand how the disparity-to-3D conversion works, you can take a look at the VOXL1 DFS server, which uses more of the OpenCV functions to compute DFS. This line: https://gitlab.com/voxl-public/voxl-sdk/services/apq8096-dfs-server/-/blob/master/server/dfs_processing_engine.cpp?ref_type=heads#L710 makes a call to cv::reprojectImageTo3D, and you can find the details here: https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html. cv::reprojectImageTo3D needs the matrix Q, which is constructed using cv::stereoRectify; you can see how it is used here: https://gitlab.com/voxl-public/voxl-sdk/services/apq8096-dfs-server/-/blob/master/server/dfs_processing_engine.cpp?ref_type=heads#L313 (including the camera calibrations that are loaded from the config file).
Also, I am not sure whether by depth_image in your question you meant disparity (in pixels) or actual depth in meters (or whatever units you are using). The OpenCV functions I linked convert disparity in pixels to real-world x, y, z using the intrinsic and extrinsic camera calibration (the latter includes the baseline between the two cameras).
Alex
-
Thanks for the information. I think I now understand what is happening on my end.