Hi ModalAI Team,
Can you provide guidance on running multiple TFLite models onboard the QRB5165 board (voxl-suite 1.1.3-1, tflite 2.8.0-2)? I've read the SDK documentation, but the following points are unclear to me:
1. Do you need to configure and run multiple instances of voxl-tflite-server?
I've tried this by adding additional service definitions under /etc/systemd/system (e.g., one called voxl-tflite-server-pose.service) and starting them via systemctl. Does each instance need to point at its own config file (e.g., /etc/modalai/voxl-tflite-server-pose.conf)?
When testing this approach I was able to run several parallel instances of voxl-tflite-server, but they all appear to read the default config file (/etc/modalai/voxl-tflite-server.conf), in which I can only define a single model. I also couldn't find a way to pass an alternate config file path in the systemd service file: the -c option seems to be reserved for config handling rather than accepting a path, and invoking the server that way doesn't actually run the model.
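For reference, this is roughly the unit file I created. Only the binary path and unit location reflect my actual setup; the Description and ordering dependency are my own additions, and I may well be missing something here:

```ini
# /etc/systemd/system/voxl-tflite-server-pose.service
# Intended to run a second tflite-server instance for a pose model.
# Problem: there is no apparent way to point this instance at a
# different config file, so it reads the default
# /etc/modalai/voxl-tflite-server.conf like the stock service does.
[Unit]
Description=Second voxl-tflite-server instance (pose model)
After=voxl-wait-for-fs.service

[Service]
ExecStart=/usr/bin/voxl-tflite-server
Restart=always

[Install]
WantedBy=multi-user.target
```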
2. Do you use a single config file and voxl-tflite-server service to run multiple models?
Given that the voxl-tflite-server service seems locked to voxl-tflite-server.conf, should I instead specify multiple models in that single config file (setting allow_multiple and output_pipe_prefix accordingly) and let the one service manage system resources? If so, do you have an example of the syntax? (I tried adding a second model definition to the conf, but it didn't work.)
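For context, this is the kind of thing I attempted in /etc/modalai/voxl-tflite-server.conf. The pose model path and the "_2"-suffixed keys are pure guesses on my part at what the parser might accept; the rest mirrors the stock config on my board:

```json
{
	"skip_n_frames":	0,
	"model":	"/usr/bin/dnn/ssdlite_mobilenet_v2_coco.tflite",
	"input_pipe":	"/run/mpa/hires/",
	"delegate":	"gpu",
	"allow_multiple":	true,
	"output_pipe_prefix":	"mobilenet",

	"model_2":	"/usr/bin/dnn/pose_model.tflite",
	"output_pipe_prefix_2":	"pose"
}
```

The server starts with this file but only the first model runs, so the guessed keys are presumably just ignored.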
3. Or am I missing something else, such as needing to manually publish an additional output pipe from each extra server instance?
Thanks,
Nick