ModalAI Forum

    Seeking Reference Code for MPA Integration with RTSP Video Streams for TFLite Server

    • Alex Kushleyev (ModalAI Team) @anghung

      @anghung, sure, I can re-build it. I will talk to the team about including this updated OpenCV build in the main SDK.

      Meanwhile, the only change was creating the directory, so you can just do this manually before installing the current deb (from the link above) on VOXL2:

      mkdir -p /usr/lib/python3.6/dist-packages/
      
      • anghung @Alex Kushleyev

        @Alex-Kushleyev
        Yes, this is the same way I installed that package.
        [screenshot]

        • anghung @Alex Kushleyev

          @Alex-Kushleyev

          Back to the original topic: integrating the RTSP stream with the tflite server.

          I found that the problem is that in the sample code you provided, the format in the metadata is RGB.

          However, the tflite server (https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/src/inference_helper.cpp?ref_type=heads#L305) only handles NV12, NV21, YUV422, and RAW8.

          Thus, I need to make the RTSP stream NV12 so that tflite can take over the rest of the job:

          ! ... ! ... ! videoconvert ! video/x-raw,format=NV12 ! appsink
          

          Is there a proper way to handle this kind of format problem? Even once NV12 gets into tflite, it still needs to be converted and resized to RGB.
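
          For reference, the conversion the tflite server still has to do on an NV12 input is roughly the following (an illustrative OpenCV sketch, not the actual inference_helper.cpp code):

              #include <cstdint>
              #include <opencv2/opencv.hpp>

              // Sketch: turn an NV12 buffer into a resized RGB image for the model input.
              // width/height describe the NV12 frame, model_w/model_h the model input size.
              cv::Mat nv12_to_model_input(const uint8_t* nv12, int width, int height,
                                          int model_w, int model_h)
              {
                  // NV12 is one plane of height*3/2 rows: the Y plane followed by interleaved UV
                  cv::Mat yuv(height * 3 / 2, width, CV_8UC1, const_cast<uint8_t*>(nv12));
                  cv::Mat rgb, resized;
                  cv::cvtColor(yuv, rgb, cv::COLOR_YUV2RGB_NV12);       // NV12 -> packed RGB
                  cv::resize(rgb, resized, cv::Size(model_w, model_h)); // scale to model input
                  return resized;
              }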

          Thanks!

            • anghung @Alex Kushleyev

              @Alex-Kushleyev

              Hi, I tried to use the HW-based decoder with HW resize.
              I also tried SW decoding; cv2.VideoCapture(stream, cv2.CAP_GSTREAMER) didn't work out.

              [screenshot]

              But in Portal, it cannot show anything.

              [screenshot]

              The following uses gi to launch GStreamer and upscale 720p to 1080p.

              [screenshot]

              • Alex Kushleyev (ModalAI Team) @anghung

                @anghung, I believe the issue is with telling appsink the correct format that you need.

                Looking at opencv source code (https://github.com/opencv/opencv/blob/4.x/modules/videoio/src/cap_gstreamer.cpp#L1000), you can see the supported formats.

                Please try inserting ! video/x-raw, format=NV12 ! between autovideoconvert and appsink.
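
                For example, the tail of the pipeline would then look like this (the elements before autovideoconvert stay whatever they already are):

                    ... ! autovideoconvert ! video/x-raw, format=NV12 ! appsink

                With NV12 negotiated at the appsink, OpenCV should hand you the NV12 buffer directly (as a single-channel Mat with rows = height * 3 / 2) instead of converting to BGR.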

                Let me know if that works!

                Edit: it seems you may have already tried this; if so, did it work? I could not understand the exact problem: do you need both NV12 and RGB?

                Alex

                • anghung @Alex Kushleyev

                  @Alex-Kushleyev

                  The code in the voxl-tflite-server inference helper only handles NV12, YUV422, NV21, and RAW8.

                  But when the RTSP stream comes into OpenCV as a frame, it is originally BGR, so I use videoconvert with video/x-raw,format=NV12 so that the tflite-server can handle the MPA stream.

                  However, inside the tflite-server, the NV12 will then be converted back to RGB.

                  Is there something that can make this process more efficient?

                  Thanks!

                  • Alex Kushleyev (ModalAI Team) @anghung

                    @anghung, if you would like to try, you can add a handler for RGB-type images here: https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/src/inference_helper.cpp?ref_type=heads#L282 and publish RGB from the Python script. It should be simple enough to try, based on the other examples of handling YUV and RAW8 images in the same file.

                    However, you will then be sending more data via MPA (RGB is 3 bytes per pixel, YUV is 1.5 bytes per pixel). Even so, sending more data via MPA should be more efficient than doing unnecessary conversions between YUV and RGB. So, if you can get the video capture to return RGB (not BGR), you could publish the RGB from the Python script and use it directly with the modified inference_helper.cpp.
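
                    To illustrate the kind of change I mean, here is a rough sketch of an RGB branch. The names (preprocess_rgb, model_w, model_h, model_input) are placeholders, and it assumes the IMAGE_FORMAT_RGB constant from libmodal_pipe; the real change would just be an extra case in the existing format switch in inference_helper.cpp:

                        #include <opencv2/opencv.hpp>
                        #include <modal_pipe_interfaces.h>  // camera_image_metadata_t, IMAGE_FORMAT_*

                        // Illustrative only: handle an MPA frame that is already packed RGB (3 bytes/pixel).
                        static bool preprocess_rgb(const camera_image_metadata_t& meta, char* frame,
                                                   int model_w, int model_h, cv::Mat& model_input)
                        {
                            if (meta.format != IMAGE_FORMAT_RGB) return false;
                            // No color conversion needed: just wrap the buffer and resize to the model input
                            cv::Mat rgb(meta.height, meta.width, CV_8UC3, (void*)frame);
                            cv::resize(rgb, model_input, cv::Size(model_w, model_h));
                            return true;
                        }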

                    Alex

                    • anghung @Alex Kushleyev

                      @Alex-Kushleyev

                      Okay, I will try that.

                      Also, I'm thinking about implementing RTSP-to-MPA in cpp.

                      Is there any guidance?

                      Thanks!

                      • Alex Kushleyev (ModalAI Team) @anghung

                        @anghung, if you want to implement RTSP directly in cpp, you don't need MPA. You can still use OpenCV to receive the RTSP stream. You should be able to use the cv::VideoCapture class to get the RTSP frames decoded into your cpp application, just like you did in the Python example. Then you can use OpenCV to change the format of the image if needed, or just request the correct format via the GStreamer pipeline when you create the VideoCapture instance. Finally, use the resulting image for processing.
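
                        A minimal sketch of what that cpp receiver could look like (assuming OpenCV is built with the GStreamer backend; the rtspsrc/decodebin elements are placeholders for whatever your stream actually needs):

                            #include <opencv2/opencv.hpp>
                            #include <cstdio>
                            #include <string>

                            int main()
                            {
                                // Placeholder pipeline: substitute your RTSP URL and preferred decoder elements.
                                // Requesting NV12 (or another supported format) at the appsink avoids an extra
                                // BGR conversion inside OpenCV.
                                std::string pipeline =
                                    "rtspsrc location=rtsp://<your-stream> latency=0 ! "
                                    "decodebin ! autovideoconvert ! video/x-raw,format=NV12 ! "
                                    "appsink drop=true sync=false";

                                cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
                                if (!cap.isOpened()) {
                                    fprintf(stderr, "failed to open RTSP stream\n");
                                    return -1;
                                }

                                cv::Mat frame;
                                while (cap.read(frame)) {
                                    // frame arrives in whatever format the appsink caps negotiated (here NV12);
                                    // convert/resize as needed and run inference, or feed it into the rest of
                                    // the voxl-tflite-server processing.
                                }
                                return 0;
                            }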

                        You can start by modifying the voxl-tflite-server to subscribe to the RTSP stream (as an option, instead of MPA) or just base your new application on voxl-tflite-server. This would be a nice feature if you get it working and would like to contribute it back 🙂

                        Alex

                        • anghung @Alex Kushleyev

                          @Alex-Kushleyev

                          Of course, I have been using cv::VideoCapture on my laptop for testing.

                          Because I am on a business trip, I asked a colleague to make the modifications in voxl-tflite-server and run some tests. However, the results did not match our expectations.

                          As you mentioned, this function (https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/src/inference_helper.cpp?ref_type=heads#L282) should be able to handle the RGB format directly, so we just added another case to the switch to prevent the function from returning false.

                          [screenshot]

                          We send RGB frames to MPA and the video looks fine in Portal, but tflite can't handle them properly. The journal messages show that the tflite service restarts repeatedly.

                          [screenshot]

                          The new voxl-tflite-server we built handles NV12 correctly, just like the old one. Given this situation, what methods should we use to gather more detailed information?

                          We are happy to contribute our successfully tested code. However, we have a limited number of VOXL2 boards available for development, as some are currently in the R&A process. We have already planned to purchase more VOXL2 boards to meet our development needs.

                          Thanks.

                          • anghung @Alex Kushleyev

                            @Alex-Kushleyev

                            I have checked inference_helper and modified it to handle the RGB format successfully.

                            [screenshot]
