Building TensorFlow Lite From Source w/ NNAPI
-
Hello,
I am attempting to build TensorFlow Lite v2.6 on the RB5 with NNAPI enabled. After following the TensorFlow tutorial, I am unable to build with NNAPI even with the correct CMake flags set to enable it.
The problem is a missing preprocessor definition of __ANDROID__=1. After defining that, the build fails on missing Android includes such as:
sys/system_properties.h (copied cutils/sys/system_properties to resolve)
android/api-level.h
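For reference, the configure step I'm working from looks roughly like this (a sketch, not my exact invocation: TFLITE_ENABLE_NNAPI and TFLITE_ENABLE_GPU are the stock TFLite CMake options, while the source path and toolchain setup are placeholders):

```
# Sketch of the TFLite v2.6 CMake configure on the RB5; paths are placeholders
# and a cross toolchain / sysroot still has to provide real Android headers.
cmake ../tensorflow_src/tensorflow/lite \
  -DCMAKE_BUILD_TYPE=Release \
  -DTFLITE_ENABLE_NNAPI=ON \
  -DTFLITE_ENABLE_GPU=ON \
  -DCMAKE_C_FLAGS="-D__ANDROID__=1" \
  -DCMAKE_CXX_FLAGS="-D__ANDROID__=1"
cmake --build . -j"$(nproc)"
```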
On the Qualcomm Website here:
https://developer.qualcomm.com/qualcomm-qcs610-development-kit/learning-resources/neural-network-api
It says I should be able to use NNAPI. I understand that the GStreamer plugin is an option, but I would like to interface with TensorFlow Lite directly. Is there an SDK that will help me build this? I've looked through most of them, and they don't seem to have the files I would need.
Has anyone made any progress with this, or could anyone point me in a good direction?
-
Hi @mrawding ,
What hardware are you using?
We are very light on RB5 Flight docs at this point. I have headers from our system image build that I can try to share if that would help, but we've not gone too far yet in this area.
We have our system image in an emulator as well, but still pretty alpha level: https://gitlab.com/voxl-public/rb5-flight/rb5-flight-emulator
-
Hi Travis,
I am using the EVT board at the moment. The problem is that it seems to be missing the Android NDK on board. We need those toolchain libraries on board to compile TensorFlow Lite with the NNAPI and Hexagon acceleration delegates. The only thing that might work as of now is GPU delegation, but I haven't tested it at runtime just yet.
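For context, once a build with NNAPI succeeds, applying the delegate at runtime should look roughly like this (a sketch against the stock StatefulNnApiDelegate API in TFLite 2.6; the model path is a placeholder and nothing here is RB5-specific):

```cpp
#include <memory>

#include "tensorflow/lite/delegates/nnapi/nnapi_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // NNAPI delegate with a sustained-speed preference; created before the
  // interpreter so it outlives it.
  tflite::StatefulNnApiDelegate::Options options;
  options.execution_preference =
      tflite::StatefulNnApiDelegate::Options::kSustainedSpeed;
  tflite::StatefulNnApiDelegate nnapi_delegate(options);

  // Load a model; "model.tflite" is a placeholder.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Hand supported ops to NNAPI; anything unsupported falls back to the CPU.
  if (interpreter->ModifyGraphWithDelegate(&nnapi_delegate) != kTfLiteOk) {
    return 1;
  }
  interpreter->AllocateTensors();
  // ... fill inputs, interpreter->Invoke(), read outputs ...
  return 0;
}
```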
How was this done on the VOXL? I tried downloading the Hexagon SDK, but I am unsure how to compile and install the NDK on the RB5.
-
Hey @mrawding,
For TensorFlow Lite on VOXL we currently use voxl-tflite-server, which uses the GPU delegate. This is a custom TFLite build we made.
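In case it helps @mrawding, wiring up the GPU delegate at runtime looks roughly like this (a sketch against the stock TfLiteGpuDelegateV2 C API, not the actual voxl-tflite-server code; the model path is a placeholder):

```cpp
#include <memory>

#include "tensorflow/lite/delegates/gpu/delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load a model; "model.tflite" is a placeholder.
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // GPU delegate with default options plus a sustained-speed preference.
  TfLiteGpuDelegateOptionsV2 gpu_options = TfLiteGpuDelegateOptionsV2Default();
  gpu_options.inference_preference =
      TFLITE_GPU_INFERENCE_PREFERENCE_SUSTAINED_SPEED;
  TfLiteDelegate* gpu_delegate = TfLiteGpuDelegateV2Create(&gpu_options);

  // Offload supported ops to the GPU; anything unsupported stays on the CPU.
  interpreter->ModifyGraphWithDelegate(gpu_delegate);
  interpreter->AllocateTensors();
  // ... fill inputs, interpreter->Invoke(), read outputs ...

  // Tear down the interpreter before deleting the delegate it references.
  interpreter.reset();
  TfLiteGpuDelegateV2Delete(gpu_delegate);
  return 0;
}
```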
-
@modalab Just bumping this thread. It would be very helpful to get NNAPI built on both the VOXL and the RB5, as that would enable running both PyTorch and TensorFlow Lite models and greatly ease the use of machine learning models on the platform. From what I can gather, NNAPI is needed to utilize the neural processor on the RB5.
-
Hi @snogar ,
Roger! The hood is open right now; we'll be digging in here over the next week(s) and will update you.
-