ModalAI Forum

    No detections when running custom YOLOv8 model on voxl-tflite-server

    VOXL 2
    • Zachary Lowell 0 (ModalAI Team):

      https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/include/tensor_data.h

      These are all the potential data types that voxl-tflite-server is expecting.
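
      For reference, a quick way to see which of those types a given .tflite file actually uses is to print its input and output tensor dtypes with the TensorFlow Lite Python API. This is only a minimal sketch: it assumes the tensorflow pip package is installed, and the filename below is just a placeholder.

      # Sketch: list a model's tensor dtypes so they can be compared against
      # the types handled in tensor_data.h. The filename is a placeholder.
      import tensorflow as tf

      interpreter = tf.lite.Interpreter(model_path="yolov8n_float16.tflite")
      interpreter.allocate_tensors()

      for detail in interpreter.get_input_details() + interpreter.get_output_details():
          print(detail["name"], detail["dtype"], detail["shape"])

      Note that TFLite float16 post-training quantization typically stores the weights as float16 while keeping the input and output tensors as float32, which is how a *_float16_quant.tflite model can still take a float32 tensor path.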

      • svempati (replying to Zachary Lowell 0):

        I see, so my model is not supported by voxl-tflite-server since it is float16, and the tflite server only supports 32-bit precision for floating-point values. Am I understanding that correctly, or am I missing something? The default YOLOv5 model included on the VOXL 2 (yolov5_float16_quant.tflite) is also float16 precision, so I wonder how the functions in tensor_data.h handle that.

        One question: what command did you use to view these error logs from voxl-tflite-server?

        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        
        • Zachary Lowell 0 (ModalAI Team):

          @svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:

          I just ran voxl-tflite-server directly from the command line instead of in the background via systemd, so the errors print straight to the terminal. I would recommend NOT quantizing your model, as the train-yolov8 directions do not recommend that.
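
          For example, stopping the background service and running the server in the foreground might look like this (a minimal sketch, assuming the systemd unit shares the executable's name):

          systemctl stop voxl-tflite-server
          voxl-tflite-server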

          Zach

            • svempati (replying to Zachary Lowell 0):

            @Zachary-Lowell-0 Got it, I will try that out and will let you know if I have any more questions. Thanks for your help!

              • svempati (replying to Zachary Lowell 0):

               @Zachary-Lowell-0 I wanted to follow up with you again on this; the issue seems to be the model conversion process from PyTorch to TFLite. To confirm this, I tried it with the default yolov8n.pt downloaded from Ultralytics by running this command from the GitLab repository:

              python export.py yolov8n.pt
              

               This creates a new yolov8n_float16.tflite file. However, running this file on voxl-tflite-server shows this output before displaying Error in TensorData<float>: should not reach here:

              WARNING: Unknown model type provided! Defaulting post-process to object detection.
              INFO: Created TensorFlow Lite delegate for GPU.
              INFO: Initialized OpenCL-based API.
              INFO: Created 1 GPU delegate kernels.
              Successfully built interpreter
              
              ------VOXL TFLite Server------
              
               4 5 6
               4 5 6
              Connected to camera server
              
              

               I even tried running export.py on the VOXL emulator to account for any differences in CPU architecture between my computer and the VOXL (x86 vs. ARM), but I still get the same error. Do you think there is anything I might be missing? Thank you!
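
               One possible cross-check is exporting a plain float32 TFLite model directly with the Ultralytics API instead of the repository's export.py (a sketch, assuming the ultralytics pip package and the same yolov8n.pt weights):

               # Sketch: export yolov8n.pt to an unquantized (float32) TFLite model.
               # half=False and int8=False are the defaults, so no quantization is applied.
               from ultralytics import YOLO

               model = YOLO("yolov8n.pt")
               model.export(format="tflite", half=False, int8=False)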

                • Zachary Lowell 0 (ModalAI Team):

                @svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:

                WARNING: Unknown model type provided! Defaulting post-process to object detection.

                @svempati I will try recreating this issue today and get back to you!

                  • svempati (replying to Zachary Lowell 0):

                  @Zachary-Lowell-0 Just wanted to follow up to see if you were able to replicate this issue?

                    • Zachary Lowell 0 (ModalAI Team):

                     @svempati I was able to use an open-source training set and then follow the docs to make my own custom YOLOv8 model capable of running on the VOXL 2. Do you want to send me your actual dataset so I can try creating the model and training it on a TPU?

                      • svempati (replying to Zachary Lowell 0):

                       @Zachary-Lowell-0 I would first like to diagnose what is causing the YOLOv8 model not to work on the VOXL 2 for me. Will it only work if you train the model and export it to TFLite on a TPU? I am getting the issue even if I train the YOLOv8 model on an open-source dataset or use the pretrained yolov8n.pt model downloaded from Ultralytics. I want to make sure that a YOLOv8 model trained from scratch on an open-source dataset works on the VOXL, so that I can move on to using my custom dataset.
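
                       For reference, training on a small open-source dataset with the Ultralytics API and exporting to TFLite is typically along these lines (a sketch, assuming the ultralytics pip package; coco128.yaml is the example dataset config that ships with it):

                       # Sketch: quick end-to-end check - train briefly on a public dataset,
                       # then export an unquantized TFLite model to try on voxl-tflite-server.
                       from ultralytics import YOLO

                       model = YOLO("yolov8n.pt")                      # pretrained nano weights
                       model.train(data="coco128.yaml", epochs=10, imgsz=640)
                       model.export(format="tflite")                   # float32 TFLite by default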

                       In case there is no other workaround, I should be able to send you the dataset I am using.

                      Thanks!

                        • Zachary Lowell 0 (ModalAI Team):

                         Let me try to train a custom model and record a Loom of it in the next few days, and I'll get that over to you showing how I do it!

                          • Zachary Lowell 0 (ModalAI Team):

                          https://www.loom.com/share/bf52e252ab09444bb366f265a3f36dc5

                           Alright, take a look at this Loom please - it might help point you in the right direction in terms of training your model.

                          Zach

                            • svempati (replying to Zachary Lowell 0):

                             @Zachary-Lowell-0 Thanks for sharing the video! I looked at it, and I did pretty much the same steps you did. It might be worth mentioning that I had to modify the Dockerfile, because the one in the documentation was throwing a version-mismatch error when installing the onnx package.

                             This was the original Dockerfile command:

                            RUN pip3 install ultralytics tensorflow onnx "onnx2tf>1.17.5,<=1.22.3" tflite_support onnxruntime onnxslim "onnx_graphsurgeon>=0.3.26" "sng4onnx>=1.0.1" tf_keras
                            

                             I modified it to this:

                            RUN pip3 install ultralytics tensorflow "onnx2tf>1.17.5,<=1.22.3" tflite_support onnxruntime onnxslim "onnx_graphsurgeon>=0.3.26" "sng4onnx>=1.0.1" tf_keras
                            RUN pip3 install onnx==1.20.1
                            

                            I don't think this should cause any issues, but could you confirm?
