No detections when running custom YOLOv8 model on voxl-tflite-server
-
These are all the potential data types that voxl-tflite-server is expecting.
-
I see, so my model is not supported by voxl-tflite-server since it is float16, and the tflite server only supports 32-bit precision if we want to use floating point values. Am I understanding that correctly, or am I missing something? Because the default YOLOv5 model that is included on the VOXL 2 (yolov5_float16_quant.tflite) is also float16 precision, so I wonder how the functions in tensor_data.h handle that.
One question: what command did you use to view these error logs from voxl-tflite-server?
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
Error in TensorData<float>: should not reach here
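For reference, a quick way to check what precision a .tflite file actually exposes (just a sketch, assuming a local TensorFlow install; the model path is only an example) is:

```python
# Sketch: inspect a .tflite model's input/output tensor dtypes to see
# whether the interpreter-facing tensors are float16 or float32.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolov5_float16_quant.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details() + interpreter.get_output_details():
    print(detail["name"], detail["shape"], detail["dtype"])
```

If I understand TFLite float16 quantization correctly, it stores the weights in float16 but keeps the input/output tensors float32 (dequantizing at runtime), which would explain why the stock yolov5_float16_quant.tflite still goes through the float32 path in tensor_data.h.
-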
@svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:
I just ran voxl-tflite-server directly from the command line instead of in the background via systemd - i.e., invoke voxl-tflite-server itself at the prompt. I would recommend NOT quantizing your model, as the directions in the train-YOLOv8 docs do not recommend that.
Zach
-
@Zachary-Lowell-0 Got it, I will try that out and will let you know if I have any more questions. Thanks for your help!
-
@Zachary-Lowell-0 I wanted to follow up with you again on this, and the issue seems to be the model conversion process from PyTorch to TFLite. To confirm this, I tried it with the default yolov8n.pt downloaded from ultralytics by entering this command from the GitLab repository: python export.py yolov8n.pt. That creates a new yolov8n_float16.tflite file. However, running this file on voxl-tflite-server shows this output before displaying Error in TensorData<float>: should not reach here:
WARNING: Unknown model type provided! Defaulting post-process to object detection.
INFO: Created TensorFlow Lite delegate for GPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
Successfully built interpreter
------VOXL TFLite Server------
4 5 6 4 5 6
Connected to camera server
I even tried running export.py on the voxl emulator to account for any differences in CPU architecture between my computer and the VOXL (x86 vs ARM), but I still get the same error. Do you think there is anything I would be missing? Thank you!
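In case it matters, the equivalent conversion using the plain ultralytics API (my assumption of what export.py boils down to, not the actual script from the repository) is roughly:

```python
# Sketch: PyTorch -> TFLite conversion via the ultralytics export API
# (assumed equivalent of the repo's export.py, shown only for reference).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# half=True requests a float16 TFLite export; no int8 quantization,
# per the earlier recommendation not to quantize the model.
model.export(format="tflite", half=True, imgsz=640)
```
-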
@svempati said in No detections when running custom YOLOv8 model on voxl-tflite-server:
WARNING: Unknown model type provided! Defaulting post-process to object detection.
@svempati I will try recreating this issue today and get back to you!
-
@Zachary-Lowell-0 Just wanted to follow up to see if you were able to replicate this issue?
-
@svempati I was able to use an open source training set and then leverage the docs to make my own custom YOLOv8 model capable of running on the VOXL 2 - do you want to provide me the actual dataset, and I can try creating the model and training it on a TPU?
-
@Zachary-Lowell-0 I would first like to diagnose what is causing the YOLOv8 model to not work on the VOXL 2 for me. Will it only work when you train the model / export it to TFLite on a TPU? I am getting the issue even if I train the YOLOv8 model on an open source dataset or use the pretrained yolov8n.pt model downloaded from ultralytics. I want to make sure a YOLOv8 model trained from scratch on an open source dataset works on the VOXL, so that I can move on to using my custom dataset. If there is no other workaround, I should be able to send you the dataset I am using.
Thanks!
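For completeness, the training and export flow I am running looks roughly like this (just a sketch using the ultralytics API; coco128 stands in here for whatever open source dataset is used):

```python
# Sketch: train YOLOv8 from the pretrained yolov8n.pt checkpoint on an
# open source dataset (coco128.yaml is only an example config), then
# export the result to a float16 TFLite model.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(data="coco128.yaml", epochs=100, imgsz=640)
model.export(format="tflite", half=True)
```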
-
Let me try to train a custom model and record a Loom of it in the next few days, and I'll get that over to you showing how I do it!
-
https://www.loom.com/share/bf52e252ab09444bb366f265a3f36dc5
Alright, take a look at this Loom please - it might help point you in the right direction in terms of training your model.
Zach
-
@Zachary-Lowell-0 Thanks for sharing the video! I looked at it, and I pretty much did the same steps you did. It might be worth mentioning that I had to modify the Dockerfile, because the one in the documentation was throwing a version mismatch error when installing the onnx part. This was the original Docker command:
RUN pip3 install ultralytics tensorflow onnx "onnx2tf>1.17.5,<=1.22.3" tflite_support onnxruntime onnxslim "onnx_graphsurgeon>=0.3.26" "sng4onnx>=1.0.1" tf_keras
I modified it to this:
RUN pip3 install ultralytics tensorflow "onnx2tf>1.17.5,<=1.22.3" tflite_support onnxruntime onnxslim "onnx_graphsurgeon>=0.3.26" "sng4onnx>=1.0.1" tf_keras
RUN pip3 install onnx==1.20.1
I don't think this should cause any issues, but could you confirm?
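As a quick sanity check inside the container (just a sketch; the package list is taken from the Dockerfile above), the installed versions can be printed to confirm the separate onnx pin did not pull anything else out of range:

```python
# Sketch: print installed versions of the conversion toolchain to verify
# that pinning onnx separately did not downgrade or break other packages.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("onnx", "onnx2tf", "onnxruntime", "onnxslim", "ultralytics",
            "tensorflow", "tflite_support", "tf_keras"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```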