ModalAI Forum
    VOXL2 RTSP decoding fails

    VOXL 2
    Aaky @Alex Kushleyev

      @Alex-Kushleyev Thanks for all the information, Alex. Actually I got it solved: the problem was with my VOXL power cable coming from the VOXLPM. After changing this wire, the reboots stopped. I wonder if a loose connection was causing the problem when GPU acceleration started on VOXL. I was monitoring my system's current consumption with voxl-inspect-battery and noted that before running the ML model it was 0.9 A, and with ML acceleration it rose to 1.2 A.
      A few observations from voxl-inspect-cpu once ML acceleration started: the temperature rose from 70 (no ML acceleration) to 80 (with ML acceleration). I am using YOLOv5 on the GPU, and I saw about 50% GPU utilization after acceleration. Maybe my video decoding is also hardware-based, with some GPU usage of its own adding to the total.

      Alex Kushleyev (ModalAI Team) @Aaky

        @Aaky, I am glad you figured it out. Most of the time, unexpected reboots are due to some sort of power issue; that is why I originally asked about the power source. If a bench power supply is used to power VOXL2 and it cannot provide sufficient current, the system will reset. But sometimes cabling is the issue.

        Alex

        Aaky @Alex Kushleyev

          @Alex-Kushleyev One quick piece of information I wanted about this feature. I have this script running on my VOXL2, and in its default form it is always decoding and publishing frames. I want it to be smart and subscriber-based: only once an MPA subscriber connects to the published pipe should I start decoding. The reason is that if nothing is listening to this stream, it keeps consuming CPU cycles. Any quick Python-based sample code would work. I am searching for this myself, but it would save me time if you could point me to the right location.

          Aaky @Aaky

            @Eric-Katzfey Can you provide some guidance on my query above?

            Alex Kushleyev (ModalAI Team) @Aaky

              @Aaky, I see what you are asking.

              What is needed is a Python binding to the C function int pipe_server_get_num_clients(int ch), which is defined here

              pympa_create_pub is already set up to return the MPA channel number, so we would just need to use that channel number to check the number of clients.

              I can implement this, but it will need to wait a few days; it is a good feature to have. If you would like to try implementing it yourself, you can add the bindings here:

              • Add a call to pipe_server_get_num_clients in https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-tools/-/blob/pympa-experimental/tools/pympa.cpp, in a similar way to how the other functions are called.
              • Add the actual Python wrapper to https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-tools/-/blob/pympa-experimental/tools/python/pympa.py.
              • Then finally, when you call pympa_create_pub, save the returned channel number and pass it in to check for clients.
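              Once the binding exists, the decode loop can be gated on the client count. Here is a minimal sketch of that control flow; `get_num_clients` is a stand-in for the future pympa wrapper around the C function pipe_server_get_num_clients, and all names other than that C function are illustrative, not part of the released pympa API:

              ```python
              import time

              def gated_publish_loop(ch, get_num_clients, decode_frame, publish_frame,
                                     iterations, idle_sleep=0.0):
                  """Decode and publish only while at least one MPA client is connected.

                  ch              -- channel number returned by pympa_create_pub
                  get_num_clients -- callable wrapping pipe_server_get_num_clients(ch)
                                     (hypothetical binding, stubbed here for illustration)
                  decode_frame    -- callable doing the expensive RTSP decode
                  publish_frame   -- callable publishing one frame to the pipe
                  Returns the number of frames actually decoded.
                  """
                  decoded = 0
                  for _ in range(iterations):
                      if get_num_clients(ch) > 0:
                          # A subscriber is attached: pay the decode cost and publish.
                          publish_frame(ch, decode_frame())
                          decoded += 1
                      else:
                          # No subscribers: skip decoding and idle cheaply.
                          time.sleep(idle_sleep)
                  return decoded
              ```

              With this structure the script stops burning CPU cycles on decoding whenever the client count drops to zero, which is exactly the behavior requested above.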

              Alex
