@Zachary-Lowell-0 I wanted to follow up with you again on this: the issue seems to be the model conversion process from PyTorch to TFLite. To confirm this, I tried it with the default yolov8n.pt downloaded from Ultralytics, entering this command from the GitLab repository:

```
python export.py yolov8n.pt
```
This creates a new yolov8n_float16.tflite file. However, running this file on voxl-tflite-server shows the following output before failing with `Error in TensorData<float>: should not reach here`:
```
WARNING: Unknown model type provided! Defaulting post-process to object detection.
INFO: Created TensorFlow Lite delegate for GPU.
INFO: Initialized OpenCL-based API.
INFO: Created 1 GPU delegate kernels.
Successfully built interpreter
------VOXL TFLite Server------
4 5 6
4 5 6
Connected to camera server
```
I even tried running export.py inside the VOXL emulator to rule out any CPU-architecture differences between my computer and the VOXL (x86 vs. ARM), but I still get the same error. Is there anything I might be missing? Thank you!
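For reference, one thing I can still check on my side is whether the exported model's tensor dtypes match what the server expects. Below is a small sketch (assuming a desktop TensorFlow install; `describe_tflite_model` is just a helper name I made up) that dumps the input/output tensor types of the exported file using the standard TFLite Python interpreter:

```python
import numpy as np
import tensorflow as tf

def describe_tflite_model(model_path):
    """Report the input/output tensor names, dtypes, and shapes of a .tflite file.

    A dtype mismatch between what the model exposes and what
    voxl-tflite-server expects could plausibly trigger the
    "TensorData<float>: should not reach here" error.
    """
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    return {
        "inputs": [(d["name"], d["dtype"], d["shape"].tolist())
                   for d in interpreter.get_input_details()],
        "outputs": [(d["name"], d["dtype"], d["shape"].tolist())
                    for d in interpreter.get_output_details()],
    }

if __name__ == "__main__":
    # "yolov8n_float16.tflite" is the file produced by export.py above.
    print(describe_tflite_model("yolov8n_float16.tflite"))
```

If the input or output tensors show up with an unexpected dtype, that would point at the quantization settings in export.py rather than at the server itself.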