ModalAI Forum
    Voxl2 Stereo Camera Depth to PC transformation

    Modal Pipe Architecture (MPA)
    • Jeffrey Mao

      Quick question: I was looking through the VOXL2 DFS server. Do you know the camera information, or the formula, needed to convert the pixel values of the stereo_disparity image to a point cloud? I would like to understand this better.

      Traditionally, I believe the formula should be something like the following; however, I am not sure what the equivalent fx, fy, cx, cy would be for the stereo image.

      // Back-project every other pixel of a metric depth image into a point
      // cloud using the pinhole intrinsics fx, fy, cx, cy.
      for (int i = 0; i < depth_image.rows; i += 2)
      {
          for (int j = 0; j < depth_image.cols; j += 2)
          {
              geometry_msgs::Point32 pt;
              pt.z = depth_image.at<float>(i, j);    // depth at (row i, col j)
              pt.x = (float)((j - cx) * pt.z / fx);  // column -> x
              pt.y = (float)((i - cy) * pt.z / fy);  // row -> y
              cloud_.points.push_back(pt);
          }
      }
      
      • Alex Kushleyev (ModalAI Team) @Jeffrey Mao

        @Jeffrey-Mao,

        The VOXL2 DFS server publishes the point cloud, so you can use that directly (it is published as x, y, z points).

        If you would like to understand how the disparity-to-3D conversion works, you can take a look at the VOXL1 DFS server, which uses more of the OpenCV functions to compute DFS:

        https://gitlab.com/voxl-public/voxl-sdk/services/apq8096-dfs-server/-/blob/master/server/dfs_processing_engine.cpp?ref_type=heads#L710 makes a call to cv::reprojectImageTo3D, and you can find the details here: https://docs.opencv.org/4.x/d9/d0c/group__calib3d.html.
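        As a rough illustration (not taken from the DFS server source), the call looks something like the sketch below. It assumes a valid Q matrix is already available (see the next paragraph for how Q is built), and the function name disparityTo3D is just for illustration.

        #include <opencv2/calib3d.hpp>
        #include <opencv2/core.hpp>

        // Reproject a disparity image (in pixels) into a 3-channel image of
        // (X, Y, Z) points expressed in the rectified left-camera frame.
        cv::Mat disparityTo3D(const cv::Mat& disparity, const cv::Mat& Q)
        {
            cv::Mat xyz;
            // handleMissingValues=true maps invalid disparities to a large Z
            cv::reprojectImageTo3D(disparity, xyz, Q, /*handleMissingValues=*/true);
            return xyz;
        }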

        cv::reprojectImageTo3D needs the matrix Q, which is constructed using cv::stereoRectify; you can see how it is used here: https://gitlab.com/voxl-public/voxl-sdk/services/apq8096-dfs-server/-/blob/master/server/dfs_processing_engine.cpp?ref_type=heads#L313 (including the camera calibrations that are loaded from the config file).
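        A minimal sketch of building Q, assuming the left/right intrinsics (K1, D1, K2, D2) and the right camera's pose relative to the left (R, T) have already been loaded from the calibration file; the variable and function names here are illustrative, not the ones used in the server:

        #include <opencv2/calib3d.hpp>
        #include <opencv2/core.hpp>

        cv::Mat buildQ(const cv::Mat& K1, const cv::Mat& D1,
                       const cv::Mat& K2, const cv::Mat& D2,
                       const cv::Mat& R,  const cv::Mat& T,
                       cv::Size imageSize)
        {
            cv::Mat R1, R2, P1, P2, Q;
            // Q is the 4x4 disparity-to-depth mapping matrix that
            // cv::reprojectImageTo3D consumes.
            cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T,
                              R1, R2, P1, P2, Q);
            return Q;
        }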

        Also, I am not sure whether by depth_image in your question you meant disparity (in pixels) or actual depth in meters (or whatever units you are using). The OpenCV functions I linked convert the disparity in pixels to real-world x, y, z using the intrinsic and extrinsic camera calibration (the latter includes the baseline between the two cameras).
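        To make the disparity-vs-depth distinction concrete, here is the scalar relationship for a rectified stereo pair (a sketch, not code from the server); fx is the rectified focal length in pixels and baseline is the distance between the two cameras:

        static inline float disparityToDepth(float disparity_px, float fx, float baseline)
        {
            if (disparity_px <= 0.0f) return 0.0f;   // invalid / infinite-range pixel
            return (fx * baseline) / disparity_px;   // depth along the optical axis, in baseline units
        }

        Once the depth Z is known, the x and y formulas from your first post (using the rectified fx, fy, cx, cy) recover the full 3D point.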

        Alex

        • Jeffrey Mao @Alex Kushleyev

          @Alex-Kushleyev

          Thanks for the information. I think I understand what is going on on my end then.
