ModalAI Forum

    Bad DFS disparity

    VOXL 2
A Former User @afdrus

      @afdrus

      Can I ask what model of Intel Realsense you're using?

      Thomas

afdrus @Guest

        @thomas Sure, the model that I am using is the D455.

I was just trying to understand whether the disparity I am getting from voxl-dfs-server is the best I can get (or at least not too far from it). That is why I asked if you could post a good disparity image taken with one of your stereo setups. The one on the docs page https://docs.modalai.com/voxl-dfs-server-0_9/ looks very similar to what I am getting at the moment: good disparity along the edges of objects, but no information in uniform areas.

        Thanks again for the reply!

A Former User @afdrus

          @afdrus

Yeah, for DFS we actually invoke a lower-level library to compute it for us, so it's kind of a black box in the sense that we don't have a lot of control over the algorithmic process. If you tried out DFS Server on SDK 0.9 (like on the docs page) and the result wasn't good, there isn't a whole lot I can say. Definitely re-check your camera calibration, camera mounting, and focus - these things all play a big role in the result.
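If it helps, one quick way to eyeball the calibration is to overlay horizontal lines on a rectified left/right pair and check that matching features sit on the same image row. A minimal OpenCV sketch (the file names are placeholders for frames you save yourself, not anything produced by voxl-dfs-server):

```cpp
// Minimal sketch: overlay horizontal lines on a rectified stereo pair to
// eyeball rectification/calibration quality. On a well-calibrated pair the
// same physical feature should appear on the same row in both images.
// "left.png" / "right.png" are placeholder file names.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // Put the two images side by side and draw green scanlines every 40 rows.
    cv::Mat both;
    cv::hconcat(left, right, both);
    cv::cvtColor(both, both, cv::COLOR_GRAY2BGR);
    for (int y = 0; y < both.rows; y += 40)
        cv::line(both, cv::Point(0, y), cv::Point(both.cols - 1, y),
                 cv::Scalar(0, 255, 0), 1);

    cv::imwrite("rectification_check.png", both);
    return 0;
}
```

If corresponding points visibly drift off the lines, the stereo matcher will struggle no matter what, and the calibration is worth redoing first.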

          Sorry this isn't that informative of an answer, hope it at least helps some!

          Thomas Patton

Moderator ModalAI Team @Guest

@thomas Depth from stereo works by correlating features across two image sensors. Against a uniform surface, such as a white wall, it will never generate depth. You need an active sensor, like ToF or LiDAR, to measure the depth of a flat, white wall.
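To illustrate the point, here is a toy sketch (not the voxl-dfs-server implementation) that prints the block-matching cost along a scanline: on a textured patch the sum-of-absolute-differences has a clear minimum at the true disparity, while on a uniform patch the cost curve is nearly flat, so no disparity can be chosen with confidence. The pixel coordinates are placeholders you would pick from your own rectified images.

```cpp
// Toy illustration: SAD matching cost vs. candidate disparity for one patch.
// A textured patch gives a distinct minimum; a uniform patch gives a flat
// cost curve, which is why passive stereo produces no depth there.
#include <opencv2/opencv.hpp>
#include <cstdio>

static void print_costs(const cv::Mat& left, const cv::Mat& right,
                        int x, int y, int block, int max_disp)
{
    cv::Rect l_roi(x, y, block, block);
    for (int d = 0; d <= max_disp; d++) {
        cv::Rect r_roi(x - d, y, block, block);        // shift left along the scanline
        double cost = cv::norm(left(l_roi), right(r_roi), cv::NORM_L1);
        printf("d=%2d  SAD=%8.0f\n", d, cost);
    }
}

int main()
{
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    // Placeholder coordinates: one patch on a textured object, one on a blank wall.
    printf("textured patch:\n"); print_costs(left, right, 400, 240, 9, 32);
    printf("uniform patch:\n");  print_costs(left, right, 100, 240, 9, 32);
    return 0;
}
```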

afdrus @Moderator

@Moderator Thank you for your response!
I agree with what you suggest, but I am not talking about computing disparity in the edge case of a white wall. I am just confused by the example below: why is the voxl disparity so sparse (top picture) compared to the one computed with OpenCV (bottom picture)?

top disparity: disparity_modalai.png

bottom disparity: new_implementation_opencv.png

In both cases I am using the same grey images, with the same extrinsic/intrinsic parameters, and yet the disparity from OpenCV is able to fill in the areas inside the edges, whereas the voxl disparity is not. Are there any parameters I can tune to make the top disparity look more like the bottom one?
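For context, a dense disparity map like the bottom one can be computed in OpenCV roughly as follows. This is a minimal StereoSGBM sketch on an already-rectified grayscale pair; the parameter values are illustrative only, not the exact settings used for the image above and not anything read from voxl-dfs-server.conf.

```cpp
// Minimal dense-disparity sketch with OpenCV StereoSGBM on a rectified pair.
// Parameter values are illustrative; tune for your own baseline and scene.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat left  = cv::imread("left.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", cv::IMREAD_GRAYSCALE);
    if (left.empty() || right.empty()) return 1;

    const int block = 5;
    auto sgbm = cv::StereoSGBM::create(
        /*minDisparity*/      0,
        /*numDisparities*/    64,                  // must be divisible by 16
        /*blockSize*/         block,
        /*P1*/                8  * block * block,  // smoothness penalties help
        /*P2*/                32 * block * block,  // fill low-texture regions
        /*disp12MaxDiff*/     1,
        /*preFilterCap*/      63,
        /*uniquenessRatio*/   10,
        /*speckleWindowSize*/ 100,
        /*speckleRange*/      2,
        cv::StereoSGBM::MODE_SGBM);

    cv::Mat disp16;                                // CV_16S, disparity scaled by 16
    sgbm->compute(left, right, disp16);

    cv::Mat disp8;
    disp16.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));
    cv::imwrite("disparity_sgbm.png", disp8);
    return 0;
}
```

The P1/P2 smoothness penalties are what let SGBM propagate disparity estimates into weakly textured regions, which is likely why its output looks denser than the hardware-computed result.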

A Former User @afdrus

                @afdrus

voxl-dfs-server has a couple of adjustable parameters; you can check them out in /etc/modalai/voxl-dfs-server.conf. But yeah, not everything is tweakable, as we're using a hardware-optimized API to compute these images. If you're happier with the OpenCV results, you could always fork voxl-dfs-server and make a small change to the implementation so the disparity is computed in OpenCV instead of with the usual libmodal-cv call. I can help you out with this if you'd like.

                Hope this helps!

                Thomas Patton

afdrus @Guest

@thomas Thank you for your response! That would be great; could you please show me a bit more precisely where I can start editing the code to make these adjustments?

A Former User @afdrus

                    @afdrus

Assuming you're on VOXL 2, start by forking the DFS Server repo here. Then take a look at the _image_callback function here; it defines what happens when each image comes in. These lines extract the frame into left/right OpenCV images, which you can then use to compute DFS. Then you should push the results out over a pipe using a format similar to what we do; consult the libmodal-pipe documentation if you need any help with this.
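A rough sketch of what the swapped-in computation might look like, assuming the existing callback code already gives you the left/right frames as cv::Mat (as the lines linked above do). The helper function name is a placeholder, and the publishing step is only indicated with a comment rather than real libmodal-pipe calls:

```cpp
// Rough sketch of an OpenCV-based disparity computation to call from
// _image_callback once the left/right cv::Mat images have been extracted.
// Function name and publishing step are placeholders, not real server code.
#include <opencv2/opencv.hpp>

static cv::Ptr<cv::StereoSGBM> sgbm =
    cv::StereoSGBM::create(0, 64, 5, 200, 800, 1, 63, 10, 100, 2,
                           cv::StereoSGBM::MODE_SGBM);

static void compute_and_publish_disparity(const cv::Mat& left, const cv::Mat& right)
{
    cv::Mat disp16, disp8;
    sgbm->compute(left, right, disp16);               // CV_16S, disparity * 16
    disp16.convertTo(disp8, CV_8U, 255.0 / (64 * 16.0));

    // TODO: package disp8 with the same metadata/format the server already
    // uses and write it out with libmodal-pipe, mirroring the existing
    // publish code in voxl-dfs-server.
}
```

Keeping the output format identical to the existing server means downstream tools (voxl-portal, etc.) can keep reading from the same pipe without changes.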

                    Hope this helps, let me know if you have any other questions!

                    Thomas Patton

afdrus @Guest

@thomas Thanks a lot for the support! I'll dig into it and get back to you if something is not clear!

A Former User @afdrus

                        @afdrus

                        Sounds good, let me know if I can help in any way!
