ModalAI Forum

    PyTorch/TFLite models

      sarahl
      last edited by sarahl

      Hi, I've been able to run custom MobileNet TFLite models via voxl-tflite-server, but when trying to run a PyTorch YOLOv5s model ported to TFLite, I get this error:

      Resolved reporter
      ERROR: Didn't find op for builtin opcode 'RESIZE_NEAREST_NEIGHBOR' version '3'
      ERROR: Registration failed.
      Failed to construct interpreter

      It seems this op is only supported in newer versions of TF. Will voxl-tflite-server support it soon? I noticed the docs say it needs TF <= 2.2.3 for now, but I was wondering whether this is going to be updated.

      I think the conversion produces an ONNX model as an intermediate step; is there a way to run an ONNX model on VOXL 1? Alternatively, is there a way to run PyTorch models with hardware acceleration? I've tried running CPU models with PyTorch via docker, but I keep encountering issues or the board crashes.
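
      For reference, a minimal sketch of that PyTorch → ONNX → TFLite path. The model source, file names, opset, and input size here are assumptions rather than details from this thread, and YOLOv5's own export.py handles extra wrinkles that this bare pattern skips:

      import onnx
      import torch
      import tensorflow as tf
      from onnx_tf.backend import prepare

      # Load YOLOv5s without the AutoShape wrapper so the raw graph exports.
      model = torch.hub.load('ultralytics/yolov5', 'yolov5s', autoshape=False)
      model.eval()

      # PyTorch -> ONNX; a lower opset tends to map to older TFLite op versions.
      dummy = torch.zeros(1, 3, 640, 640)
      torch.onnx.export(model, dummy, 'yolov5s.onnx', opset_version=11,
                        input_names=['images'], output_names=['output'])

      # ONNX -> TensorFlow SavedModel -> TFLite flatbuffer.
      prepare(onnx.load('yolov5s.onnx')).export_graph('yolov5s_tf')
      converter = tf.lite.TFLiteConverter.from_saved_model('yolov5s_tf')
      with open('yolov5s.tflite', 'wb') as f:
          f.write(converter.convert())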

        A Former User
        last edited by

        Hey @sarahl,

        On VOXL we are limited to TF v2.2.3, since newer TF releases require a newer glibc/gcc toolchain than the platform provides. This is not likely to change, but you could use the docker strategy with a current TF version and a newer gcc to get around it.
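
        A minimal sketch of what that looks like inside such a docker, assuming an image with a recent TensorFlow wheel installed (the model path is illustrative). allocate_tensors() is where an unsupported op version fails on the older runtime:

        import numpy as np
        import tensorflow as tf

        interpreter = tf.lite.Interpreter(model_path='yolov5s.tflite')
        interpreter.allocate_tensors()  # op registration happens here

        inp = interpreter.get_input_details()[0]
        out = interpreter.get_output_details()[0]

        # Push one dummy frame through the model to sanity-check the graph.
        interpreter.set_tensor(inp['index'],
                               np.zeros(inp['shape'], dtype=inp['dtype']))
        interpreter.invoke()
        print(interpreter.get_tensor(out['index']).shape)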

        We have not done any testing with PyTorch, but for ONNX support you can look into the Qualcomm Neural Processing SDK, which supports TensorFlow, Caffe, and ONNX models.
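
        A hedged sketch of that SNPE route, converting the ONNX intermediate to a DLC container. It assumes the SDK's tools are on PATH; the flag names follow the SDK docs but may vary between releases:

        import subprocess

        # ONNX -> DLC, run on the host where the SDK is installed.
        subprocess.run(['snpe-onnx-to-dlc',
                        '--input_network', 'yolov5s.onnx',
                        '--output_path', 'yolov5s.dlc'],
                       check=True)

        # On target, a tool like snpe-net-run (with --use_gpu or --use_dsp)
        # would then execute the .dlc on a hardware accelerator.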

          sarahl
          last edited by modaltb

          Hi @Matt-Turi, thanks for the information! On the M0054/RB5, are you still limited to TF v2.2.3? Also, can the same GPU/hardware acceleration used in voxl-tflite-server be achieved at the docker level? And the same question for the Qualcomm Neural Processing SDK?

            A Former User
            last edited by

            On the QRB5165/M0054 platforms we are using TensorFlow v2.8.0.

            Hardware acceleration within a docker would require exposing the GPU and other accelerator drivers to the running container, which I have not done before. The Qualcomm Neural Processing SDK, however, does have direct access to the hardware accelerators if set up correctly.
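
            For context, a minimal sketch of how a TFLite hardware delegate is wired up from Python. The delegate library name and path are assumptions, and inside a docker this only works if the GPU device nodes and driver libraries are mapped into the container:

            import tensorflow as tf

            # Load a GPU delegate library built for the platform (assumed path).
            delegate = tf.lite.experimental.load_delegate(
                '/usr/lib/libtensorflowlite_gpu_delegate.so')

            interpreter = tf.lite.Interpreter(model_path='yolov5s.tflite',
                                              experimental_delegates=[delegate])
            interpreter.allocate_tensors()  # supported ops now run on the accelerator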

              sansoy @sarahl
              last edited by

              @sarahl Hi Sarah, could you share any info, like a Colab notebook, on how you trained a TFLite model that works on the VOXLs? I've had zero luck, even though my trained models work on my Linux and Mac systems. I created a YOLOv5 model, quantized it, and it inferred correctly. I was also able to convert a YOLOv8 model to TFLite using the Ultralytics converter. I also retrained my own MobileNet SSD model and quantized it; they all infer correctly except on the VOXL2.
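
              Since on-target op support often depends on how a model was quantized, here is a hedged sketch of post-training full-integer quantization for comparison. The SavedModel path, input size, and random calibration data are placeholders:

              import numpy as np
              import tensorflow as tf

              # Calibration data: replace random frames with real preprocessed images.
              def representative_data():
                  for _ in range(100):
                      yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

              converter = tf.lite.TFLiteConverter.from_saved_model('ssd_saved_model')
              converter.optimizations = [tf.lite.Optimize.DEFAULT]
              converter.representative_dataset = representative_data
              converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
              converter.inference_input_type = tf.uint8
              converter.inference_output_type = tf.uint8

              with open('ssd_int8.tflite', 'wb') as f:
                  f.write(converter.convert())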
