ModalAI Forum
    VOXL 2: Failed to apply GPU delegate

    VOXL 2
    • Viswanadh Chegu
      last edited by

      Hi @modaltb, @Chad-Sweet

      We are facing an issue with our own TFLite model, deployed on VOXL 2 for text-detection bounding boxes: it reports "Failed to apply GPU delegate".
      From our understanding, the model falls back to running on the CPU in this case, so we are seeing high inference latency of 7-8 seconds, which we want to reduce by running it on the GPU.
      Can you please help us with this issue?

      PS: Our TFLite model is 41 MB.

      Attaching a snapshot from the VOXL 2 run for reference.

      Thanks

      voxl-tflite-server.png
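For context, the fallback behavior described above (GPU delegate fails to load, inference silently runs on the CPU) can be sketched with the standard TFLite Python API. The delegate library name below is an assumption for illustration; the actual shared library depends on the platform and TFLite build:

```python
import tensorflow as tf

def make_interpreter(model_path, delegate_lib="libtensorflowlite_gpu_delegate.so"):
    """Try to attach the GPU delegate; fall back to plain CPU execution.

    `delegate_lib` is a placeholder name -- the real library path
    varies by platform and TensorFlow Lite build.
    """
    try:
        delegate = tf.lite.experimental.load_delegate(delegate_lib)
        interpreter = tf.lite.Interpreter(
            model_path=model_path,
            experimental_delegates=[delegate],
        )
        return interpreter, "gpu"
    except (ValueError, OSError) as err:
        # This branch corresponds to "Failed to apply GPU delegate":
        # the interpreter still loads, but every op runs on the CPU.
        print(f"GPU delegate unavailable ({err}); running on CPU")
        return tf.lite.Interpreter(model_path=model_path), "cpu"
```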

      • James Strawson (ModalAI Team)
        last edited by

        To run on the GPU, a model must use only operations from the supported set and be quantized properly. Please refer to the TensorFlow Lite GPU docs for details:

        https://www.tensorflow.org/lite/performance/gpu
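As a starting point, the float16 post-training quantization that the TensorFlow Lite docs recommend for GPU execution can be sketched as below. This assumes a Keras model as the input; the original model's actual source format and any VOXL-specific conversion tooling may differ:

```python
import tensorflow as tf

def quantize_float16(keras_model):
    """Post-training float16 quantization: roughly halves weight size
    and keeps the model in a form the GPU delegate can accept."""
    converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    return converter.convert()  # bytes of the .tflite flatbuffer
```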

          • Viswanadh Chegu @James Strawson
            last edited by

            @James-Strawson Thanks for the reply. Will look into it and get back.
