No detections when running custom YOLOv8 model on voxl-tflite-server
-
Hi @Zachary-Lowell-0, yes, I am confirming that I followed the instructions in that gitlab repository.
Here is the tflite file and labels file: https://drive.google.com/drive/folders/1kyjanabVSP_pH_jsQyjQG9z6hFYZ_iij?usp=drive_link
-
@svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:
"labels": "/usr/bin/dnn/yolov8_labels.txt",
So running your model we get the following errors via voxl-tflite-server:
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here

This means there is an issue in the model itself, most likely something that went wrong during the build process. Specifically, this is a model issue, not a labels-file issue: your .tflite model has an output tensor with a different data type than what voxl-tflite-server expects.
The code itself shows the error when you hit this case statement:
// Gets the uint8_t tensor data pointer
template <>
inline uint8_t *TensorData(TfLiteTensor *tensor, int batch_index)
{
    // number of elements per batch entry (dimension 0 is the batch)
    int nelems = 1;
    for (int i = 1; i < tensor->dims->size; i++) {
        nelems *= tensor->dims->data[i];
    }
    switch (tensor->type) {
    case kTfLiteUInt8:
        // offset the raw data pointer to the requested batch entry
        return tensor->data.uint8 + nelems * batch_index;
    default:
        fprintf(stderr, "Error in %s: should not reach here\n", __FUNCTION__);
    }
    return nullptr;
}
Which means the output tensor's type doesn't match any of the types this header file handles. Please look into your model.
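A quick way to confirm which types your exported model actually produces is to dump them with the TF Lite Python interpreter on your desktop. A minimal sketch, assuming a standard TensorFlow install; the model path is just an example:

import tensorflow as tf

# point this at the .tflite file you exported
interp = tf.lite.Interpreter(model_path="yolov8n_float16.tflite")
interp.allocate_tensors()
for d in interp.get_input_details():
    print("input :", d["name"], d["shape"], d["dtype"])
for d in interp.get_output_details():
    print("output:", d["name"], d["shape"], d["dtype"])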
zach
-
These are all the potential data types that voxl-tflite-server expects (the TensorData specializations in tensor_data.h).
-
I see, so my model is not supported by voxl-tflite-server since it is float16, and the tflite server only supports 32-bit precision if we want to use floating-point values. Am I understanding that correctly, or am I missing something? Because the default YOLOv5 model that is included on the VOXL 2 (yolov5_float16_quant.tflite) is also float16 precision, so I wonder how the functions in tensor_data.h handle that.

One question: what command did you use to view these error logs from voxl-tflite-server?

Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
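(A likely answer to the yolov5_float16_quant.tflite question, stated here as an assumption rather than something confirmed in this thread: TF Lite's float16 quantization stores the weights in float16 but normally leaves the model's input and output tensors as float32, so the float specialization in tensor_data.h still matches on the stock model. Comparing both models makes this easy to verify; a sketch with example paths:)

import tensorflow as tf

# compare the output dtypes of the stock model and the custom export
for path in ["yolov5_float16_quant.tflite", "yolov8n_float16.tflite"]:
    interp = tf.lite.Interpreter(model_path=path)
    interp.allocate_tensors()
    outs = [d["dtype"] for d in interp.get_output_details()]
    print(path, "->", outs)
-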
@svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:
I just ran voxl-tflite-server directly from the command line instead of in the background via systemd - aka run voxl-tflite-server directly on the command line. I would recommend NOT quantizing your model, as the directions in the train-yolov8 instructions do not recommend that.

Zach
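(For reference, if the export.py in that gitlab repository wraps the ultralytics exporter, an unquantized float32 export can be forced roughly like this. A sketch only; the flag names assume a recent ultralytics release and are not taken from the repository itself:)

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# half=False / int8=False keep the graph in float32, which is the
# floating-point type the tensor_data.h specializations handle
model.export(format="tflite", half=False, int8=False, imgsz=640)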
-
@Zachary-Lowell-0 Got it, I will try that out and will let you know if I have any more questions. Thanks for your help!
-
@Zachary-Lowell-0 I wanted to follow up with you again on this; the issue seems to be the model conversion process from PyTorch to tflite. To confirm this, I tried it with the default yolov8n.pt downloaded from ultralytics, by entering this command from the gitlab repository:

python export.py yolov8n.pt

so that I create a new yolov8n_float16.tflite file. However, running this file on voxl-tflite-server shows this output before displaying Error in TensorData<float>: should not reach here:

WARNING: Unknown model type provided! Defaulting post-process to object detection.
INFO: Created TensorFlow Lite delegate for GPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
Successfully built interpreter
------VOXL TFLite Server------
4 5 6
4 5 6
Connected to camera server

I even tried running export.py on the voxl emulator to account for any differences in CPU architecture between my computer and the VOXL (x86 vs ARM), but I still get the same error. Do you think there is anything I would be missing? Thank you!
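(One more diagnostic that can help separate a conversion problem from a server-side mismatch; a sketch with example paths, run on the desktop where the export was done. If the model loads and runs under the plain TF Lite interpreter with float32 outputs, the converted graph itself is probably fine:)

import numpy as np
import tensorflow as tf

interp = tf.lite.Interpreter(model_path="yolov8n_float16.tflite")
interp.allocate_tensors()
inp = interp.get_input_details()[0]
out = interp.get_output_details()[0]
print("input :", inp["shape"], inp["dtype"])
print("output:", out["shape"], out["dtype"])

# run one dummy frame to confirm the graph executes end to end
interp.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interp.invoke()
print("sample output:", interp.get_tensor(out["index"]).flat[:5])
-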
@svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:
WARNING: Unknown model type provided! Defaulting post-process to object detection.
@svempati I will try recreating this issue today and get back to you!
-
@Zachary-Lowell-0 Just wanted to follow up to see if you were able to replicate this issue?
-
@svempati I was able to use an open-source training set and then leverage the docs to make my own custom yolov8 model capable of running on the voxl2 - do you want to provide the actual dataset to me, and I can try creating the model and training it on a TPU?
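(For anyone following along, the training step itself is the standard ultralytics flow. A sketch only; coco128 here is a stand-in example dataset, not the one used in this thread:)

from ultralytics import YOLO

# start from the pretrained checkpoint and fine-tune on an open dataset
model = YOLO("yolov8n.pt")
model.train(data="coco128.yaml", epochs=50, imgsz=640)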
-
@Zachary-Lowell-0 I would first like to diagnose what is causing the yolov8 model to not work on the VOXL 2 for me. Will it only work if you train the model / export it to tflite on a TPU? I am getting the issue even if I train the yolov8 model on an open-source dataset or use the pretrained yolov8n.pt model downloaded from ultralytics. I want to make sure that a yolov8 model trained from scratch on an open-source dataset works on the VOXL, so that I can move on to using my custom dataset. In case there is no other workaround, I should be able to send you the dataset I am using.
Thanks!