VOXL2 NPU/GPU Usage for general neural networks
-
Hello, I work at JPL and need access to the VOXL2's GPU and NPU for neural network inference.
Our use case does not process images (the network inputs are vectors), and we would like to run custom Docker containers independently of the VOXL2 SDK. Essentially, we would like to use the VOXL2 as a general-purpose single-board computer (like a Raspberry Pi) with straightforward access to the GPU/NPU for inference (similar to calling `.to(device)` on CUDA-enabled GPUs). Is this possible?
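For concreteness, this is the kind of workflow we have in mind (a minimal PyTorch sketch; the model and shapes are just placeholders, and `cuda` here stands in for whatever accelerator backend the VOXL2 would expose):

```python
import torch
import torch.nn as nn

# Pick an accelerator if one is available; on the VOXL2 we'd hope for an
# equivalent device string for the GPU/NPU (hypothetical, for illustration).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Small MLP over vector inputs -- no images involved.
model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Linear(64, 4),
).to(device)
model.eval()

x = torch.randn(1, 16).to(device)  # a single 16-dimensional input vector
with torch.no_grad():
    y = model(x)

print(tuple(y.shape))  # (1, 4)
```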
Thanks