ModalAI Forum

    VOXL2 tflite custom models

Jgaucin:

      Hello!

TL;DR – Does the VOXL2 include an EdgeTPU in its hardware, or is "edgetpu" just part of the name of the DeepLab v3 model below? Also, how can I access the models on the VOXL2 in order to fine-tune or optimize them for my specific needs?

      edgetpu_deeplab_321_os32_float16_quant.tflite

[Image: VOXL2_dnn.png]

I am making this post to ask about this and to see whether anyone has had the same experience or has any resources for accomplishing this task. I have seen forum posts of this being done successfully on VOXL, but none for the VOXL2 (QRB5165).

I found an article that reports the performance of the models and links to the segmentation model on GitHub. That GitHub repository, PINTO_Model_Zoo, also links to the EdgeTPU-DeepLab models trained on the Cityscapes dataset.


My understanding is that I can use a DeepLab v3 segmentation model compatible with TensorFlow Lite, which then needs to be quantized for best performance and for compatibility with the NNAPI used by the VOXL2. I am unsure what role the EdgeTPU plays in this process. I hope the documentation on this is made public soon to help get the process started.
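
For what it's worth, a minimal sketch of that quantization step with the standard tf.lite.TFLiteConverter is shown below. The SavedModel path and the representative-dataset generator are placeholders I made up for illustration, not anything from the VOXL2 SDK:

```python
import numpy as np
import tensorflow as tf

def representative_images():
    # Yield a few preprocessed inputs so the converter can calibrate
    # activation ranges. Random data is a placeholder here; real
    # calibration should use actual images from the target domain.
    for _ in range(100):
        yield [np.random.rand(1, 321, 321, 3).astype(np.float32)]

# "deeplabv3_saved_model" is a hypothetical SavedModel export path.
converter = tf.lite.TFLiteConverter.from_saved_model("deeplabv3_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_images
# Restrict to int8 ops so the model stays on integer-only paths,
# which accelerator delegates such as NNAPI generally prefer.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("deeplabv3_321_int8.tflite", "wb") as f:
    f.write(tflite_model)
```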

      Thank you as always!

Moderator (ModalAI Team):

@Jgaucin voxl-tflite-server is just a wrapper around standard TensorFlow Lite that has been compiled with the proper configuration to take advantage of the QRB5165.

        voxl-tflite-server code

So, if you can achieve what you are trying to do using TensorFlow Lite on a desktop, then you should be able to bring it over to VOXL 2 in a straightforward manner.
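
For example, a desktop-side sanity check with the stock TensorFlow Lite Python interpreter could look something like the sketch below (it uses the model filename from the question and dummy input data):

```python
import numpy as np
import tensorflow as tf

# Load the .tflite model with the stock TensorFlow Lite interpreter.
interpreter = tf.lite.Interpreter(
    model_path="edgetpu_deeplab_321_os32_float16_quant.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy frame shaped and typed to match the model's input tensor.
frame = np.random.rand(*input_details[0]["shape"]).astype(
    input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

# Per-pixel segmentation output.
seg = interpreter.get_tensor(output_details[0]["index"])
print(seg.shape)
```

If that runs cleanly on the desktop, the same model file is what you would point voxl-tflite-server at on the VOXL 2.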

The inference_worker function here is where the models are processed.
