So, for anyone facing the same problems as me:
- If you want to train with TF2, you can't really take models from the TF1 Model Zoo, because the TF2 feature extractors are not compatible with the TF1 ones (even though TensorFlow states that every TF1 model can be used with later versions, that may be outdated).
- So for the TF1 Model Zoo, go with TensorFlow 1.15.1; for the TF2 Model Zoo, just take the newest TF version.
- For TF2, take an SSD model from the TF2 Model Zoo (SSD is the only architecture supported when exporting for TFLite use).
- (This assumes you have set up the Model Zoo directory and installed all the required pip packages):
4.1 Go to models/research/object_detection
4.2 Use export_tflite_ssd_graph.py
4.3 Use the code mentioned in the ModalAI docs to create the TFLite model
4.4 Put the .tflite model on the VOXL
4.5 cd into the directory containing your_model.tflite and type:
mv your_model.tflite /usr/bin/dnn/ssdlite_mobilenet_v2_coco.tflite
Why rename the model?
Somewhere in the ModalAI code for the TFLite server there is a line that only accepts a model with that exact name.
If you don't rename it, you will get a "Model is not supported" error.
4.6 Replace the classes in the file below with your own classes
4.7 Run the TFLite server
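The export and deploy steps (4.2 to 4.5) can be sketched as shell commands. All paths, the checkpoint prefix, and the use of `adb push` for transferring the file are assumptions; substitute your own values. The commands are printed with `echo` instead of executed, so you can review them first and drop the `echo` to run them for real:

```shell
# Assumed paths -- adjust to your training setup.
PIPELINE_CONFIG="path/to/pipeline.config"     # config of your trained model
CKPT_PREFIX="path/to/model.ckpt"              # trained checkpoint prefix
EXPORT_DIR="tflite_export"
# The exact filename the VOXL TFLite server checks for:
DST="/usr/bin/dnn/ssdlite_mobilenet_v2_coco.tflite"

# 4.2: freeze the SSD graph with the TFLite detection postprocessing op
echo python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path="$PIPELINE_CONFIG" \
    --trained_checkpoint_prefix="$CKPT_PREFIX" \
    --output_directory="$EXPORT_DIR" \
    --add_postprocessing_op=true

# 4.3: convert the frozen graph to .tflite (tflite_convert ships with TF 1.x);
# 300x300 matches the actual input size of this model, see the note below
echo tflite_convert \
    --graph_def_file="$EXPORT_DIR/tflite_graph.pb" \
    --output_file="$EXPORT_DIR/your_model.tflite" \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays="TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3" \
    --inference_type=FLOAT \
    --allow_custom_ops

# 4.4 / 4.5: copy to the VOXL, renamed to the filename the server expects
echo adb push "$EXPORT_DIR/your_model.tflite" "$DST"
```

The renaming is folded into the push here; if you copy the model another way, do the `mv` on the VOXL itself as in step 4.5.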
For me this worked with SSD MobileNet V2 320x320 (don't get confused, its input is actually 300x300, I don't know why) from:
with the base model given there. I'll have to see if it really works after training.
I will probably make a GitHub repository in the future, when I have time, featuring:
+ setting up TF2 and TF1 on your local machine with GPU support
+ exporting to TFLite and deploying on the VOXL
because it was such a pain to set all this up while not being familiar with machine learning, and the documentation is poor.