ModalAI Forum

    AR0144 RGB output on VOXL2

    Image Sensors
    Alex Kushleyev (ModalAI Team) @Jordyn Heil

      @Jordyn-Heil, I have seen this issue before. It happens when the camera server app is compiled against one version of the libmodal-pipe library while a different version is running on VOXL2; an API change in the header file causes the mismatch. The solution is to use the latest libmodal-pipe (from the dev branch) both when cross-compiling and when running on VOXL2. You don't need to update the whole SDK, just that library.

      Alex

    Jordyn Heil @Alex Kushleyev

        @Alex-Kushleyev, thank you! That solved my issue!

        I can successfully stream the grayscale MISP streams over RTSP from my color AR0144s, but I am unable to stream color frames over RTSP. Is the processing pipeline for this implemented, and if so, what modifications are needed to send and access these streams over RTSP?

        Also, I believe you mentioned earlier that the VOXL2 GPU can handle panoramic stitching between camera streams. I am currently doing the stitching on the CPU, but there is too much lag for my application, so I would like to offload the work to the GPU with a compute shader. What is the best way to do this on the VOXL2?

    Alex Kushleyev (ModalAI Team) @Jordyn Heil

      @Jordyn-Heil, yes, I need to add the part that feeds the frames into the video encoder for the AR0144 color use case. It's not difficult; I will do it this week.

      Regarding GPU usage, I have only done OpenCL programming on VOXL2, but OpenGL is also supported; I can probably find some resources to help with that. Which of the two do you prefer? OpenCL would probably be easier (for me to help you with), since we have done much more with it already.

          Alex

    Jordyn Heil @Alex Kushleyev

            @Alex-Kushleyev, great, thanks for doing that!

            OpenCL should work just fine!

    Alex Kushleyev (ModalAI Team) @Jordyn Heil

              @Jordyn-Heil , sorry for the delay.

              I will provide an initial example of getting 4 AR0144 streams into a standalone app and doing something with all of them on the GPU using OpenCL. This should be a nice example / starting point for multi-camera image fusion.

      Initially I will just use our standard MPA approach to get the image buffers from the camera server to the standalone app. After that, I will also try to do the same using the ION buffer support (which is already in the camera server), which will allow us to map the buffers to the GPU without any copy operations or sending image data over pipes. Using ION buffer sharing will further reduce CPU and memory bandwidth usage, but it is still a somewhat experimental feature and will need more testing.

              Please give me a few more days.

              Alex

    Jordyn Heil @Alex Kushleyev

                @Alex-Kushleyev, no worries! I appreciate your help on this!

                In the meantime, would you be able to share the CAD file for the M0173 breakout board? I was not able to find it anywhere.

    Vinny (ModalAI Team) @Jordyn Heil

                  Hi @Jordyn-Heil
                  We try to keep this regularly updated, but sometimes miss some... in this case, M0173 is there 🙂 https://docs.modalai.com/pcb-catalog/

    Jordyn Heil @Alex Kushleyev

      @Alex-Kushleyev, were you able to update the dev build of voxl-camera-server to encode the AR0144 color streams?

    Alex Kushleyev (ModalAI Team) @Jordyn Heil

                      @Jordyn-Heil ,

                      Sorry for the delay.

      In order to test the AR0144 color RTSP stream, you can use voxl-streamer. I just pushed a small change that fixes the pipe description for the AR0144 color output from YUV to RGB. The current implementation publishes RGB, but the MPA pipe description was set to YUV, so voxl-streamer did not work.

      So right now, I can run voxl-streamer:

                      voxl2:/$ voxl-streamer -i tracking_front_misp_color
                      Waiting for pipe tracking_front_misp_color to appear
                      Found Pipe
                      detected following stats from pipe:
                      w: 1280 h: 800 fps: 30 format: RGB
                      Stream available at rtsp://127.0.0.1:8900/live
                      A new client rtsp://<my ip>:36952(null) has connected, total clients: 1
                      Camera server Connected
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      gbm_create_device(156): Info: backend name is: msm_drm
                      rtsp client disconnected, total clients: 0
                      no more rtsp clients, closing source pipe intentionally
                      Removed 1 sessions
                      

                      voxl-streamer will receive the RGB image and use the hardware encoder to encode the images into the video stream.

      The corresponding changes are on the dev branch of voxl-camera-server.

      Will that work for you, or do you need voxl-camera-server itself to publish the encoded stream? In that case, since voxl-camera-server only supports encoding YUVs, I would need to convert to YUV and then feed the image into the encoder.
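      For reference, the RGB-to-YUV conversion mentioned above can be sketched in NumPy. This is an illustrative full-range BT.601 conversion into NV12-style planes (a common hardware-encoder input layout); the exact format and coefficients the VOXL2 encoder expects are assumptions here, not confirmed by this thread.

```python
import numpy as np

def rgb_to_nv12(rgb: np.ndarray):
    """Convert an HxWx3 uint8 RGB image into NV12-style planes:
    a full-resolution Y plane plus an interleaved, 2x2-subsampled UV plane
    (full-range BT.601 coefficients)."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)

    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128.0
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128.0

    # average each 2x2 block for chroma subsampling
    u2 = u.reshape(u.shape[0] // 2, 2, u.shape[1] // 2, 2).mean(axis=(1, 3))
    v2 = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).mean(axis=(1, 3))

    y_plane = np.clip(np.round(y), 0, 255).astype(np.uint8)
    uv_plane = np.empty((u2.shape[0], u2.shape[1] * 2), dtype=np.uint8)
    uv_plane[:, 0::2] = np.clip(np.round(u2), 0, 255).astype(np.uint8)
    uv_plane[:, 1::2] = np.clip(np.round(v2), 0, 255).astype(np.uint8)
    return y_plane, uv_plane
```

      A gray pixel (128, 128, 128) maps to Y = 128 with neutral chroma (U = V = 128), which is a quick sanity check for the coefficients.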

                      I will follow up regarding GPU stuff in the next post.

                      Alex

    Alex Kushleyev (ModalAI Team) @Alex Kushleyev

                        @Jordyn-Heil ,

      I am working on some examples of how to use the GPU with images received from the camera server via MPA (raw Bayer, YUV, RGB).

                        I just wanted to clarify your use case.

      • Currently, the camera server receives a RAW8 Bayer image from the AR0144 camera (it can be RAW10/12, but that is not yet working reliably; different story)
      • the camera server then uses the CPU to de-bayer the image into RGB
      • the RGB is then sent out via MPA and can be viewed in voxl-portal (which supports RGB), or encoded via voxl-streamer, which also supports RGB input.
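      As a concrete illustration of the de-bayer step, here is a quick half-resolution de-bayer in NumPy. The camera server's actual CPU implementation is presumably higher quality (full-resolution interpolation), and the RGGB pattern order is an assumption; this sketch just shows the RAW8 mosaic → RGB idea.

```python
import numpy as np

def debayer_rggb_halfres(raw: np.ndarray) -> np.ndarray:
    """Quick half-resolution de-bayer of a RAW8 RGGB mosaic:
    each 2x2 cell (R G / G B) becomes one RGB pixel, averaging the two greens."""
    r  = raw[0::2, 0::2].astype(np.uint16)   # top-left of each 2x2 cell
    g1 = raw[0::2, 1::2].astype(np.uint16)   # top-right green
    g2 = raw[1::2, 0::2].astype(np.uint16)   # bottom-left green
    b  = raw[1::2, 1::2].astype(np.uint16)   # bottom-right blue
    out = np.stack([r, (g1 + g2) // 2, b], axis=-1)
    return out.astype(np.uint8)
```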

      Now, in order to do any image stitching, the following would need to be done in the client application:

      • subscribe to the multiple camera streams (whether Bayer or RGB)
      • load the camera calibration (intrinsics) for each camera
      • load the extrinsic parameters for each camera
      • load LSC (lens shading correction) tables for each camera (or the same table for all; close enough)
      • for simplicity, let's assume one large output buffer is allocated for all cameras to be pasted into (the panorama image). As individual images come in, the job of the app is to do the following:
                          • apply LSC corrections to the whole image
                          • (optional) perform white balance corrections (to be consistent across all cameras) (would need some analysis across all images)
                          • undistort the image according to the fisheye calibration params
                          • project the image according to the extrinsics calibration into the stitched image

      This would ignore the time synchronization aspect for now; basically, as each image comes in, the stitched image is updated.
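      A minimal sketch of the per-image steps above, assuming the LSC table is a per-pixel gain map and white balance is a global per-channel gain. The helper names are hypothetical, and the integer-offset paste is a stand-in for the real extrinsic projection.

```python
import numpy as np

def apply_lsc_and_wb(rgb, lsc_gain, wb_gains):
    """Apply a per-pixel lens-shading gain map (HxW) and global
    white-balance gains (one per channel), clipping back to uint8."""
    out = (rgb.astype(np.float32)
           * lsc_gain[..., None]
           * np.asarray(wb_gains, dtype=np.float32))
    return np.clip(out, 0, 255).astype(np.uint8)

def paste(panorama, image, x, y):
    """Paste a corrected camera image into the shared panorama buffer at
    integer offset (x, y) -- a placeholder for projection via extrinsics."""
    h, w = image.shape[:2]
    panorama[y:y + h, x:x + w] = image
    return panorama
```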

      Also, for your algorithm development, do you work with RGB images or YUV?

                        I am planning to share something simple first, which will subscribe to the 3 or 4 AR0144 cameras, load some intrinsics and extrinsics and overlay the images in a larger image (without explicitly calibrating the extrinsics). Then we can go from there.
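      The undistortion step can be sketched for an idealized equidistant fisheye model (r = f·θ), which builds a sampling map from the desired pinhole view back into the fisheye image. The real AR0144 calibration will use a different (e.g. polynomial) distortion model, so treat this purely as a starting point.

```python
import numpy as np

def fisheye_to_pinhole_map(w, h, f_fish, f_pin, cx, cy):
    """Build sampling maps that turn an equidistant fisheye image
    (r = f_fish * theta) into a pinhole view (r = f_pin * tan(theta)).
    Returns float maps (map_x, map_y) usable with cv2.remap or
    manual bilinear sampling."""
    xs, ys = np.meshgrid(np.arange(w) - cx, np.arange(h) - cy)
    r_pin = np.hypot(xs, ys)                  # radius in the pinhole view
    theta = np.arctan2(r_pin, f_pin)          # ray angle for each output pixel
    r_fish = f_fish * theta                   # where that ray lands in the fisheye
    # scale factor from pinhole radius to fisheye radius (1.0 at the center)
    scale = np.divide(r_fish, r_pin, out=np.ones_like(r_fish), where=r_pin > 0)
    return xs * scale + cx, ys * scale + cy
```

      The center pixel maps to itself, and points farther from the center pull samples inward, which matches the usual fisheye "compression" toward the edges.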

                        Alex
