ModalAI Forum
voxl-tflite-server: "FATAL: Unsupported model provided!!"

colerose

I am trying to create a custom version of voxl-tflite-server using a .tflite file of the published yolo-v4-tiny model. However, when I change the code as well as the voxl-tflite-server.conf file to include my new model, I get the error "FATAL: Unsupported model provided!!" when trying to run the server. I can't find where this message is printed in the code. How can I turn this error off?

A Former User

This error is printed at line 78 of threads.cpp. If you would like to run your own model, it is much simpler to just change the

    tflite_settings->model_name

and

    tflite_settings->labels_file_name

parameters in models.cpp to match the absolute paths on voxl where these files are located (typically /usr/bin/dnn/, or in the /data/ partition if the file is larger).
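
For example (a minimal sketch of the edit just described: the two field names come from models.cpp, but the file names and paths below are placeholders for wherever your own files actually live):

    // models.cpp -- point both fields at your own files on voxl.
    // The paths here are hypothetical; substitute your actual locations.
    tflite_settings->model_name       = "/usr/bin/dnn/yolov4-tiny.tflite";
    tflite_settings->labels_file_name = "/usr/bin/dnn/coco_labels.txt";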

colerose

@Matt-Turi thanks Matt. I ended up getting a segfault with it; I believe it was too memory-intensive because it wasn't supported by the GPU. Is there a chance that tiny-yolo could be supported by ModalAI anytime soon?

A Former User

We've done some testing with darknet and a few tflite-converted yolo models, but saw extremely poor inference times and performance from both. Is there any reason you are looking to use the yolo architecture specifically over mobilenet or some other native tensorflow model?

colerose

I have some yolo weights trained on custom data that I collected, which I wanted to test with voxl-tflite-server. Additionally, I noticed that there are some classes missing in the coco_labels.txt file included with the voxl-tflite-server code in master. Some classes, such as 'desk', are replaced with question marks:

              0  person
              1  bicycle
              2  car
              3  motorcycle
              4  airplane
              5  bus
              6  train
              7  truck
              8  boat
              9  traffic light
              10  fire hydrant
              11  ???-11
              12  stop sign
              13  parking meter
              14  bench
              15  bird
              16  cat
              17  dog
              18  horse
              19  sheep
              20  cow
              21  elephant
              22  bear
              23  zebra
              24  giraffe
              25  ???
              26  backpack
              27  umbrella
              28  ???-28
              29  ???-29
              30  handbag
              31  tie
              32  suitcase
              33  frisbee
              34  skis
              35  snowboard
              36  sports ball
              37  kite
              38  baseball bat
              39  baseball glove
              40  skateboard
              41  surfboard
              42  tennis racket
              43  bottle
              44  ???-44
              45  wine glass
              46  cup
              47  fork
              48  knife
              49  spoon
              50  bowl
              51  banana
              52  apple
              53  sandwich
              54  orange
              55  broccoli
              56  carrot
              57  hot dog
              58  pizza
              59  donut
              60  cake
              61  chair
              62  couch
              63  potted plant
              64  bed
              65  ???-65
              66  dining table
              67  ???-67
              68  ???-68
              69  toilet
              70  ???-70
              71  tv
              72  laptop
              73  mouse
              74  remote
              75  keyboard
              76  cell phone
              77  microwave
              78  oven
              79  toaster
              80  sink
              81  refrigerator
              82  ???-82
              83  book
              84  clock
              85  vase
              86  scissors
              87  teddy bear
              88  hair drier
              89  toothbrush
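
(A side note on working with this file: a labels list in this format can be loaded while keeping the ??? placeholders, so the indices of the valid classes are not shifted. Below is a minimal sketch of such a loader, not the server's actual code; it assumes the id-then-name layout shown above.)

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <vector>

    // Read a COCO-style labels file into a vector indexed by class id.
    // Placeholder entries ("???", "???-11", ...) are kept verbatim so the
    // indices of valid classes line up with the model's output ids.
    std::vector<std::string> read_labels(const std::string& path)
    {
        std::vector<std::string> labels;
        std::ifstream file(path);
        std::string line;
        while (std::getline(file, line)) {
            std::istringstream iss(line);
            int id = 0;
            std::string name;
            if (!(iss >> id)) continue;          // skip malformed lines
            std::getline(iss >> std::ws, name);  // rest of the line is the label
            if (id >= static_cast<int>(labels.size()))
                labels.resize(id + 1, "???");    // fill any gaps in the ids
            labels[id] = name;
        }
        return labels;
    }

    // Usage: auto labels = read_labels("/usr/bin/dnn/coco_labels.txt");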
              
colerose

@Matt-Turi did you try the full-fledged version of yolo, or was it tiny-yolo?

A Former User

@colerose Looks like that labels file is a bit outdated - I will see about updating it.

For your other question - in the past, I tried both the regular and tiny yolo implementations. With a modified darknet framework (an opencl backend instead of cuda), the inference times for tiny-yolo were around 30 seconds per image, and full yolo was upwards of a minute per image. I also tested with the tflite-converted models, but saw similar performance results as well as various inconsistencies / unsupported ops due to the conversion (likely why you are getting segfaults with your model).

For reference, any of the object detection models from the tf1 zoo, as well as a few from the tf2 zoo, will integrate seamlessly and can support our gpu acceleration, as seen with the mobilenetv2 model that is the default.
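
(To make "can support our gpu acceleration" concrete, this is the standard TensorFlow Lite C++ pattern for handing a model to the GPU delegate. It is a generic sketch of the tflite API, not voxl-tflite-server's actual code, and the model path is a placeholder.)

    #include <cstdio>
    #include <memory>

    #include "tensorflow/lite/delegates/gpu/delegate.h"
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main()
    {
        // The FlatBufferModel must outlive the interpreter built from it.
        auto model = tflite::FlatBufferModel::BuildFromFile(
            "/usr/bin/dnn/some_model.tflite");
        if (!model) return 1;

        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);
        if (!interpreter) return 1;

        // Hand the graph to the GPU delegate. Models containing unsupported
        // ops (e.g. some converted yolo variants) fail or fall back here,
        // which is why fully supported zoo models integrate more smoothly.
        TfLiteGpuDelegateOptionsV2 options = TfLiteGpuDelegateOptionsV2Default();
        TfLiteDelegate* gpu_delegate = TfLiteGpuDelegateV2Create(&options);
        if (interpreter->ModifyGraphWithDelegate(gpu_delegate) != kTfLiteOk) {
            fprintf(stderr, "GPU delegation failed\n");
        }

        if (interpreter->AllocateTensors() != kTfLiteOk) return 1;
        // ... fill the input tensor(s), then run inference:
        if (interpreter->Invoke() != kTfLiteOk) return 1;

        // The delegate must outlive the interpreter; free it afterwards.
        interpreter.reset();
        TfLiteGpuDelegateV2Delete(gpu_delegate);
        return 0;
    }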

colerose

@Matt-Turi this is great to know, thanks Matt! 🙂 Just curious, any reason why mobilenetv2 was chosen from the tf1/tf2 zoo when it seems that there are faster and more accurate models available?

A Former User

@colerose we selected mobilenetv2 because of its exceptional performance on embedded devices - currently, inference time in the voxl-tflite-server is ~22 ms per frame with very high precision. Also, the mobilenet family is fairly easy to retrain with a custom dataset!
