ModalAI Forum

    Python Programmatic GStreamer Access for Hardware Encoded Acceleration and Low Latency

    Tags: python, voxl2
    • Tanner Metzmeier

      Hello,

      I am using the VOXL2 with the Hadron 640R. I was originally using a standard RB5 with the Hadron 640R development kit, flashed with the FLIR Ubuntu 18 image. To record in Python on my original RB5 implementation, I used a GStreamer record pipeline set up like this:

          record_pipeline = (
              f"qtiqmmfsrc name=qmmf ! "
              f"video/x-h264,profile=high,width={self.width},height={self.height} ! h264parse ! "
              f"splitmuxsink max-size-time={segment_ns} muxer=mp4mux location=\"{location_pattern}\""
          )
      

      It appears that this pipeline does not work on the VOXL2. What pipeline should I use to accomplish this? I would prefer not to use RTSP. Our overall goal is a low-latency stream in Python, which we were able to achieve on a standard RB5.

      Thanks in advance!

      • joseph.vale @Tanner Metzmeier

        Hi @Vinny, this is part of our Hadron interface project. We're using the commands straight from FLIR, but they don't appear to work on the VOXL2. We've got the camera servers up and running via VOXL2 commands; however, there seems to be a slight variation in the commands and some nuance that we're missing.

        Thank you!
        Joseph Porter

        • Vinny (ModalAI Team) @joseph.vale

          Hi @joseph-vale
          This is not my wheelhouse, but maybe @Alex-Kushleyev can give you some pointers.
          Thanks!
          Vinny

          • Alex Kushleyev (ModalAI Team) @Vinny

            @joseph-vale ,

            You did not provide the actual error that you are seeing; however, I can try to guess what it is (even if my guess is wrong, the details below should still help you). The default build of the voxl-opencv package does not have python3 support, so if you are using GStreamer with OpenCV in Python and that is the error, you should install the voxl-opencv package that I built with python3 support (a quick way to verify the install is sketched after the links below):

            • https://storage.googleapis.com/modalai_public/temp/voxl-opencv_4.5.5-3_arm64.deb
            • source : https://gitlab.com/voxl-public/voxl-sdk/third-party/voxl-opencv/-/tree/add-python3-bindings
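
            Once the deb is installed (e.g. with dpkg -i), a quick sanity check is to confirm from python3 that cv2 imports and was built with GStreamer support. This is just a minimal check (the exact build-info formatting may differ between OpenCV versions):

            # minimal sanity check: does python3 see cv2, and was it built with GStreamer?
            import cv2

            print(cv2.__version__)  # expect 4.5.5 from the rebuilt package
            print([l for l in cv2.getBuildInformation().splitlines() if 'GStreamer' in l])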

            Below, I will assume that you want to grab images from the Boson part of the Hadron; however, a similar approach should apply to the RGB camera in the Hadron.

            First, you should test the ability to grab images without Python. You may need to change the camera number depending on which camera ID your Boson is assigned.

            IMPORTANT: make sure that voxl-camera-server is not running while you are trying to use GStreamer.

            systemctl stop voxl-camera-server
            

            Tip: you can actually stream video using X forwarding with ssh. This should stream the live Boson feed from the VOXL2 to your Linux machine:

            ssh -Y username@<voxl-ip>
            gst-launch-1.0 qtiqmmfsrc camera=0 ! "video/x-raw,width=640,height=512,framerate=30/1" ! videoconvert ! autovideosink
            

            To display the image directly in the terminal as ASCII:

            gst-launch-1.0 qtiqmmfsrc camera=0 ! "video/x-raw,width=640,height=512,framerate=30/1" ! autovideoconvert ! aasink
            

            And finally, here is a Python script that grabs h264 video from qtiqmmfsrc, decodes it, and returns frames to Python:

            import time
            import cv2
            
            # Option 1: get RGB (BGR?) frames directly from the source
            #stream = 'gst-launch-1.0 qtiqmmfsrc camera=0 ! video/x-raw, width=640,height=512,framerate=30/1 ! autovideoconvert ! appsink'
            
            # Option 2: get hardware-encoded h264 -> decode with qtivdec -> convert to BGR
            stream = 'gst-launch-1.0 qtiqmmfsrc camera=0 ! video/x-h264,format=NV12,profile=high,width=640,height=512,framerate=30/1 ! h264parse ! qtivdec ! qtivtransform ! video/x-raw,format=BGR,width=640,height=512 ! autovideoconvert ! appsink'
            
            print(stream)
            
            # open the pipeline with OpenCV's GStreamer backend
            vcap = cv2.VideoCapture(stream, cv2.CAP_GSTREAMER)
            
            frame_cntr = 0
            while True:
                ret, frame = vcap.read()
                if ret:
                    frame_cntr += 1
                    print('got frame %d with dims ' % frame_cntr, frame.shape)
                else:
                    time.sleep(0.01)  # no frame available yet; avoid busy-waiting
            

            Hopefully, that works for you.
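
            Also, if you still need segmented mp4 recording (like the splitmuxsink pipeline in your first post) rather than frames in Python, something along these lines may work on VOXL2. This is an untested sketch that simply combines the qtiqmmfsrc h264 caps from above with your splitmuxsink pattern; the camera number, resolution, segment length and output path are assumptions you will need to adjust:

            # untested sketch: segmented h264 recording on VOXL2 via the PyGObject GStreamer bindings
            # (stop voxl-camera-server first, same as above)
            import gi
            gi.require_version('Gst', '1.0')
            from gi.repository import Gst, GLib
            
            Gst.init(None)
            
            segment_ns = 60 * Gst.SECOND               # 60-second segments (assumption)
            location_pattern = '/data/video_%05d.mp4'  # output path (assumption)
            
            pipeline = Gst.parse_launch(
                'qtiqmmfsrc camera=0 ! '
                'video/x-h264,profile=high,width=640,height=512,framerate=30/1 ! h264parse ! '
                f'splitmuxsink max-size-time={segment_ns} muxer=mp4mux location={location_pattern}'
            )
            pipeline.set_state(Gst.State.PLAYING)
            
            try:
                GLib.MainLoop().run()                  # record until interrupted (Ctrl-C)
            except KeyboardInterrupt:
                pass
            finally:
                # send EOS so the currently open mp4 segment is finalized properly
                pipeline.send_event(Gst.Event.new_eos())
                bus = pipeline.get_bus()
                bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
                pipeline.set_state(Gst.State.NULL)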

            Final recommendation: if you use qtiqmmfsrc this way, the Boson data is processed by the Qualcomm ISP, and unless you have a special tuning file for Boson, the processed output will have degraded quality. Boson, by default, outputs a post-AGC 8-bit image which is already processed and does not need to be further processed by the ISP. I am not sure whether you can get RAW8 data (unmodified data from Boson) out of qtiqmmfsrc.

            We handle the above issue in voxl-camera-server by working with the RAW8 data directly. We also recently started experimenting with 14-bit pre-AGC data from Boson, which needs some processing before it is usable (if you are interested in that, I can share some more information).
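
            For reference, by "some processing" I mean roughly this kind of thing (a generic percentile-stretch example only, not what voxl-camera-server actually does):

            import numpy as np
            
            # generic example: stretch a 14-bit pre-AGC Boson frame down to 8 bits for display
            def simple_agc(frame_14bit: np.ndarray) -> np.ndarray:
                lo = np.percentile(frame_14bit, 1)
                hi = np.percentile(frame_14bit, 99)
                scaled = np.clip((frame_14bit.astype(np.float32) - lo) / max(hi - lo, 1.0), 0.0, 1.0)
                return (scaled * 255.0).astype(np.uint8)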

            Finally, if you would like to use voxl-camera-server, which is what we recommend and support, there is also a way to get encoded h264/h265 data into Python (using our experimental pympa (Python MPA bindings)). That is a topic for a discussion in another post, if you are interested.

            Alex
