@david-devassy
Hey David,
We do support deep learning inference on VOXL2! You can check out this page on our docs to read more about our deep learning capabilities. Out of the box, our YOLOv5 model runs at around 30 frames per second. You also mention obstacle avoidance and SLAM using path prediction. We have our own implementation of Visual Obstacle Avoidance (VOA), which you can read about here.
You could certainly create custom implementations of both of these on VOXL! However, I want to highlight that our code for both of the above is written in C/C++ rather than Python, for performance reasons. I can't give you a reliable estimate of Python performance, but it would likely be noticeably slower.
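If you do end up prototyping in Python, a quick way to sanity-check throughput is to time your inference loop directly. Here's a minimal sketch; `run_inference` is a placeholder for whatever model call you actually use, not part of our SDK:

```python
import time

def run_inference(frame):
    # Placeholder for your real model call (e.g. a YOLOv5 forward pass).
    time.sleep(0.01)  # simulate ~10 ms of inference work

def measure_fps(frames, n_warmup=5):
    # Warm-up iterations keep one-time setup costs out of the measurement.
    for frame in frames[:n_warmup]:
        run_inference(frame)
    start = time.perf_counter()
    for frame in frames[n_warmup:]:
        run_inference(frame)
    elapsed = time.perf_counter() - start
    return (len(frames) - n_warmup) / elapsed

frames = [None] * 105  # stand-in for real camera frames
print(f"{measure_fps(frames):.1f} FPS")
```

Comparing that number against the ~30 FPS we see from our C/C++ YOLOv5 pipeline should tell you quickly whether Python is fast enough for your use case.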
Hope this helps! Let me know if you have any other questions.
Thomas Patton
thomas.patton@modalai.com