ModalAI Forum

    No detections when running custom YOLOv8 model on voxl-tflite-server

    VOXL 2
    • svempati @Zachary Lowell 0

      Hi @Zachary-Lowell-0, yes, I can confirm that I followed the instructions in that GitLab repository.
      Here is the tflite file and labels file: https://drive.google.com/drive/folders/1kyjanabVSP_pH_jsQyjQG9z6hFYZ_iij?usp=drive_link

      • Zachary Lowell 0 (ModalAI Team)

        @svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:

        "labels": "/usr/bin/dnn/yolov8_labels.txt",

        Running your model, we get the following errors from voxl-tflite-server:

        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        Error in TensorData<float>: should not reach here
        

        This means there is an issue with the model itself, most likely introduced during the build process. Specifically, it is a model issue, not a labels-file issue: your .tflite model has an output tensor with a different data type than voxl-tflite-server expects.
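
        If you want to see exactly which data types your exported model exposes, you can load it with the TFLite Python interpreter and print the tensor details (a minimal sketch; the model filename is a placeholder for your own file):

        import tensorflow as tf

        # Load the exported model and allocate its tensors
        interpreter = tf.lite.Interpreter(model_path="yolov8_model.tflite")
        interpreter.allocate_tensors()

        # Print the name, dtype, and shape of every input and output tensor
        for d in interpreter.get_input_details():
            print("input: ", d["name"], d["dtype"], d["shape"])
        for d in interpreter.get_output_details():
            print("output:", d["name"], d["dtype"], d["shape"])

        If the output dtype printed here is not one of the types handled in tensor_data.h, you will hit the error above.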

        The code prints the error when execution falls through to the default case of this switch statement:

        // Gets the uint8_t tensor data pointer
        template <>
        inline uint8_t *TensorData(TfLiteTensor *tensor, int batch_index)
        {
            int nelems = 1;

            // Number of elements in a single batch entry
            for (int i = 1; i < tensor->dims->size; i++)
            {
                nelems *= tensor->dims->data[i];
            }

            switch (tensor->type)
            {
            case kTfLiteUInt8:
                // Offset the base pointer to the requested batch entry
                return tensor->data.uint8 + nelems * batch_index;
            default:
                // Any other tensor type is unsupported by this specialization
                fprintf(stderr, "Error in %s: should not reach here\n",
                        __FUNCTION__);
            }

            return nullptr;
        }

        So the output tensor does not match any type this header file handles. Please look into your model.

        Zach

        • Zachary Lowell 0 (ModalAI Team)

          https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/include/tensor_data.h

          That header lists all the data types voxl-tflite-server expects.

          • svempati @Zachary Lowell 0

            I see, so my model is not supported by voxl-tflite-server since it is float16, and the server only supports 32-bit precision for floating-point values. Am I understanding that correctly, or am I missing something? The default YOLOv5 model included on the VOXL 2 (yolov5_float16_quant.tflite) is also float16 precision, so I wonder how the functions in tensor_data.h handle that.
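
            For reference, my understanding is that TFLite float16 post-training quantization stores the weights as float16 while the model's input and output tensors stay float32 by default, so a float16_quant model can still present float32 outputs to the server. A sketch of that converter setup (paths are illustrative):

            import tensorflow as tf

            # Float16 post-training quantization: weights are stored as
            # float16, but input/output tensors remain float32 by default
            converter = tf.lite.TFLiteConverter.from_saved_model("yolov8n_saved_model")
            converter.optimizations = [tf.lite.Optimize.DEFAULT]
            converter.target_spec.supported_types = [tf.float16]

            with open("yolov8n_float16.tflite", "wb") as f:
                f.write(converter.convert())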

            One question: what command did you use to view these error logs from voxl-tflite-server?

            Error in TensorData<float>: should not reach here
            Error in TensorData<float>: should not reach here
            Error in TensorData<float>: should not reach here
            Error in TensorData<float>: should not reach here
            Error in TensorData<float>: should not reach here
            Error in TensorData<float>: should not reach here
            Error in TensorData<float>: should not reach here
            Error in TensorData<float>: should not reach here
            
            • Zachary Lowell 0 (ModalAI Team)

              @svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:

              I just ran voxl-tflite-server directly from the command line instead of in the background via systemd (see the commands below). I would also recommend NOT quantizing your model, as the YOLOv8 training directions do not recommend that.
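
              In other words (a minimal sketch, assuming the service is managed by systemd under the name voxl-tflite-server):

              systemctl stop voxl-tflite-server    # stop the background instance
              voxl-tflite-server                   # run in the foreground so the logs print to the terminal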

              Zach

              • svempati @Zachary Lowell 0

                @Zachary-Lowell-0 Got it, I will try that out and will let you know if I have any more questions. Thanks for your help!

                • svempati @Zachary Lowell 0

                  @Zachary-Lowell-0 I wanted to follow up with you on this: the issue seems to be in the model conversion process from PyTorch to TFLite. To confirm this, I tried it with the default yolov8n.pt downloaded from Ultralytics by entering this command from the GitLab repository:

                  python export.py yolov8n.pt
                  

                  This creates a new yolov8n_float16.tflite file. However, running this file on voxl-tflite-server shows the following output before displaying Error in TensorData<float>: should not reach here:

                  WARNING: Unknown model type provided! Defaulting post-process to object detection.
                  INFO: Created TensorFlow Lite delegate for GPU.
                  INFO: Initialized OpenCL-based API.
                  INFO: Created 1 GPU delegate kernels.
                  Successfully built interpreter
                  
                  ------VOXL TFLite Server------
                  
                   4 5 6
                   4 5 6
                  Connected to camera server
                  
                  

                  I even tried running export.py in the VOXL emulator to account for any differences in CPU architecture between my computer and the VOXL (x86 vs. ARM), but I still get the same error. Do you think there is anything I might be missing? Thank you!
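
                  As a quick sanity check before deploying, the exported file can also be loaded with the TFLite Python interpreter and run on a dummy input (a minimal sketch; the filename matches the export above, and the zero input just exercises the graph):

                  import numpy as np
                  import tensorflow as tf

                  interpreter = tf.lite.Interpreter(model_path="yolov8n_float16.tflite")
                  interpreter.allocate_tensors()

                  inp = interpreter.get_input_details()[0]
                  out = interpreter.get_output_details()[0]
                  print("input :", inp["dtype"], inp["shape"])
                  print("output:", out["dtype"], out["shape"])

                  # Feed a dummy frame to confirm the graph runs end to end
                  dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
                  interpreter.set_tensor(inp["index"], dummy)
                  interpreter.invoke()
                  print("first outputs:", interpreter.get_tensor(out["index"]).flatten()[:5])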

                  • Zachary Lowell 0 (ModalAI Team)

                    @svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:

                    WARNING: Unknown model type provided! Defaulting post-process to object detection.

                    @svempati I will try recreating this issue today and get back to you!

                    • svempati @Zachary Lowell 0

                      @Zachary-Lowell-0 Just wanted to follow up: were you able to replicate this issue?

                      • Zachary Lowell 0 (ModalAI Team)

                        @svempati I was able to use an open-source training set and then leverage the docs to make my own custom YOLOv8 model capable of running on the VOXL 2. Do you want to provide me the actual dataset, and I can try creating the model and training it on a TPU?

                        • svempati @Zachary Lowell 0

                          @Zachary-Lowell-0 I would first like to diagnose what is causing the YOLOv8 model not to work on the VOXL 2 for me. Will it only work if the model is trained and exported to TFLite on a TPU? I get the issue even when I train the YOLOv8 model on an open-source dataset or use the pretrained yolov8n.pt downloaded from Ultralytics. I want to make sure a YOLOv8 model trained from scratch on an open-source dataset works on the VOXL before moving on to my custom dataset.

                          If there is no other workaround, I can send you the dataset I am using.

                          Thanks!

                          1 Reply Last reply Reply Quote 0