Example compiling custom TFLite model?



  • Hello,

    I've been looking through the documentation about running TFLite models on the drone. The HelloTFLiteGPU example starts with the ipk already generated, and the voxl-tflite-server repo ships with a .tflite file, but it's not clear to me whether I can simply swap in my own .tflite model. Could you direct me to documentation on (or outline here) how to start with my own .tflite model and get it running on the drone?
    Sorry if I've missed it somewhere!
    Cheers,



  • Hello again,

    Still working on this and hoping for a response. I've updated the code based on the latest push to the voxl-tflite-server repo, but I'm getting an error when I try to run it with my own TFLite model (my_custom_model.tflite, which I've trained and tested on my workstation and then placed in the voxl-tflite-server/dnn folder):

    yocto:~$ voxl-tflite-server -d
    Enabling debug mode
    =================================================================
    skip_n_frames:                    0
    =================================================================
    =================================================================
    model:                            /usr/bin/dnn/my_custom_model.tflite
    =================================================================
    =================================================================
    input_pipe:                       /run/mpa/tracking/
    =================================================================
    ------VOXL TFLite Server------
    Loaded model /usr/bin/dnn/my_custom_model.tflite
    Resolved reporter
    INFO: Created TensorFlow Lite delegate for GPU.
    GPU acceleration is SUPPORTED on this platform
     !!!!!!!!!!!!!!!!!!!!!!! line 257
     !!!!!!!!!!!!!!!!!!!!!!! line 259
    ERROR: Failed to create Texture2D (clCreateImage)Invalid image size
    ERROR: Falling back to OpenGL
    Segmentation fault
    

    The printf statements are of course mine; they start at line 255 in the voxl-tflite-server/server/models.cpp file:

    TfLiteDelegatePtrMap delegates_ = GetDelegates(tflite_settings);

    fprintf(stderr, "\n\n !!!!!!!!!!!!!!!!!!!!!!! line 257\n\n");
    for (const auto& delegate : delegates_) {
        fprintf(stderr, "\n\n !!!!!!!!!!!!!!!!!!!!!!! line 259\n\n");
        // apply the delegate once and check the result
        if (interpreter->ModifyGraphWithDelegate(delegate.second.get()) != kTfLiteOk) {
            fprintf(stderr, "\n\n !!!!!!!!!!!!!!!!!!!!!!! line 261\n\n");
            printf("Failed to apply delegate\n");
            break;
        }
        else {
            fprintf(stderr, "\n\n !!!!!!!!!!!!!!!!!!!!!!! line 266\n\n");
            if (mobilenet_data->en_debug) printf("Applied delegate\n");
            break;
        }
    }
    

    So it seems like the ModifyGraphWithDelegate function is throwing this error, and my guess is that it's because of some parameter in the tflite_settings struct that's passed into the delegate, although I'm not sure. I'm hoping that one of y'all may have some insight into this issue, or be able to link me to some resources. I've been searching around but not finding much on Google.
    Thanks,



  • Hi tdewolf, sorry for the delayed response. It appears that your model does not support an OpenCL backend for GPU usage. Our Tensorflow library on voxl has disabled the default OpenGL backend that it is attempting to "fall back" on, resulting in a segmentation fault. I have seen this occur with non-quantized models running on voxl, but am not sure of the specifics for your custom model. An option to test this would be commenting out lines 255-266 where the GPU delegates are assigned in the server/models.cpp file (https://gitlab.com/voxl-public/modal-pipe-architecture/voxl-tflite-server/-/blob/master/server/models.cpp#L255) and running your model strictly using the CPU.
    If you like, you could also attach your custom tflite model file and any training scripts so I can take a look at the architecture and see why it is failing to apply the delegate.
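    Before copying a model over, a quick way to sanity-check it is to confirm the file really is a TFLite flatbuffer and that it runs with the stock CPU interpreter on your workstation. A minimal sketch (the function names are mine, and the smoke test assumes TensorFlow and NumPy are installed locally):

```python
def looks_like_tflite(model_bytes: bytes) -> bool:
    """TFLite flatbuffers carry the file identifier 'TFL3' at byte offset 4."""
    return len(model_bytes) >= 8 and model_bytes[4:8] == b"TFL3"

def cpu_smoke_test(model_path: str):
    """Load the model with the plain CPU interpreter and run one dummy inference.
    Imports are lazy so this only needs TensorFlow when actually invoked."""
    import numpy as np
    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out["index"])
```

    If the model passes both checks on the workstation but still crashes on voxl, that points at the GPU delegate rather than the model file itself.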



  • Hello Matt,

    Thanks! This is super useful. When I try commenting out the delegate code I get:

    Enabling debug mode
    =================================================================
    skip_n_frames:                    0
    =================================================================
    =================================================================
    model:                            /usr/bin/dnn/running_shoe.tflite
    =================================================================
    =================================================================
    input_pipe:                       /run/mpa/tracking/
    =================================================================
    ------VOXL TFLite Server------
    Loaded model /usr/bin/dnn/running_shoe.tflite
    Resolved reporter
    Avoiding our problems.... !!!!!!!!!!!!!!!!!!!!!!! line 257
    ------Setting TFLiteThread to ready!! W: 480 H: 640 C:3
    ------Popping index 0 frame 21596 ...... Queue size: 1
    Fault address: 0x585e5c5b58595f
    Address not mapped.
    Segmentation fault
    

    Also, I am running a non-quantized TFLite model; when I ran the quantized version I got an error about an unsupported operation (which looked to be related to quantization), so I turned off quantization. I'll try the quantized version again and get you the output from that as well. If that doesn't work, then I'll clean up the code and take you up on the offer to look over the scripts!
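    For reference, roughly how I'm producing the quantized version, a sketch of full-integer post-training quantization with tf.lite.TFLiteConverter (the function name, paths, and representative dataset are placeholders, not my exact training script):

```python
def convert_quantized(saved_model_dir: str, representative_data):
    """Post-training full-integer quantization. representative_data yields
    sample inputs matching the model's input shape, used for calibration.
    TensorFlow is imported lazily and assumed installed on the workstation."""
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = lambda: ([x] for x in representative_data)
    # Restrict to int8 builtins so any op that can't be quantized fails at
    # conversion time instead of at runtime on the drone.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8
    return converter.convert()  # returns the .tflite model as bytes
```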
    Cheers,

