@Alex-Kushleyev said in Seeking Guidance on Handling voxlCAM Sensor Data with Jetson Nano:
@藤田真也 , please take a look at the architecture diagram shown here https://docs.modalai.com/images/voxl-sdk/MPA-1.0-diagram_revC_Full.png (from https://docs.modalai.com/mpa/)
The ModalAI software SDK is designed to do the majority of sensor processing on the VOXL2. Sending all of the sensor data (specifically raw camera data) to another computer may not be possible, depending on the number of cameras, the resolution, and the FPS you want to use.
We have utilities that can convert the native sensor data (images, IMU) to ROS / ROS2:
- https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros
- https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros2
However, you would most likely need to use some sort of compressed image format to send images to another computer.
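To get a feel for why raw transfer is often impractical, here is a rough back-of-the-envelope data-rate calculation. The resolutions, frame rates, and bytes-per-pixel values below are illustrative assumptions, not the actual voxlCAM sensor formats:

```python
# Rough bandwidth estimate for a raw camera stream.
# bytes_per_pixel is an assumption: ~1 for 8-bit mono (e.g. a tracking
# camera), ~1.5 for YUV420 color. Real formats vary by sensor.
def raw_stream_mbps(width, height, fps, bytes_per_pixel=1):
    """Return the approximate raw data rate in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# Hypothetical examples:
vga = raw_stream_mbps(640, 480, 30)         # ~74 Mbps for one mono VGA stream
fhd = raw_stream_mbps(1920, 1080, 30, 1.5)  # ~746 Mbps for one YUV420 1080p stream
```

A few streams like the second one would saturate a typical WiFi or 1 GbE link on their own, which is why a compressed format (JPEG, H.264/H.265) is usually needed between computers.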
Before we get into more detail, could you list all the sensor data you want to send to the other computer (the Jetson Nano)? For each camera sensor, please specify the image resolution and FPS.
Alex
Hello Alex,
Thank you very much for your detailed and helpful response. I appreciate the time you took to explain the architecture and provide relevant resources.
I understand that the VOXL platform is designed to handle most of the sensor processing onboard. Given this, I plan to follow the suggested architecture as closely as possible in my application development.
Thank you for pointing out the utilities for converting native sensor data to ROS/ROS2:
- voxl-mpa-to-ros
- voxl-mpa-to-ros2
Based on your advice, my current plan is as follows:
1. Use the provided utilities to convert the sensor data from the voxlCAM to ROS, on the voxlCAM itself.
2. Compress the sensor data on the voxlCAM.
3. Publish the compressed sensor data from the voxlCAM and subscribe to it on the Jetson Nano.
4. Transfer the subscribed compressed data from the Jetson Nano to AWS.
5. Develop an application on AWS to receive the video data and stream it to the web.

In the future, I plan to handle more complex processing on the Jetson Nano, such as generating point cloud data or processing image recognition results from the voxlCAM.
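For the publish/subscribe step, I expect the voxlCAM and the Jetson Nano will need to share a single ROS master over the network. This is my assumed ROS 1 multi-machine setup (the IP addresses are placeholders for my own network, and I am assuming roscore runs on the voxlCAM):

```shell
# On the voxlCAM (runs roscore), assumed IP 192.168.1.10:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10

# On the Jetson Nano, assumed IP 192.168.1.20, pointing at the same master:
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.20
```

With ROS 2 this step would differ (DDS discovery instead of a central master), so please correct me if the voxl-mpa-to-ros2 path changes this picture.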
The sensor data I plan to use includes all the sensors on the voxlCAM:
- Stereo camera
- Optical flow sensor
- A camera that is present on the device but not mentioned in the datasheet
- IMU
I have referred to the following datasheet for details: voxl-cam-datasheet.
Could you please confirm whether my understanding and approach are correct? Additionally, any guidance or resources on setting up the data flow described above, especially the video streaming to AWS, would be greatly appreciated.
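For reference, this is a minimal sketch of how I currently imagine packaging each compressed frame before uploading it from the Jetson Nano; the framing format (timestamp plus length header) and function names are entirely my own assumptions, not anything from the VOXL SDK:

```python
# Sketch of a simple framing format for uploading compressed images:
# 8-byte big-endian nanosecond timestamp + 4-byte payload length + payload.
# This format is my own assumption for illustration only.
import struct
import time

HEADER = '>QI'  # uint64 timestamp_ns, uint32 payload length

def pack_frame(jpeg_bytes, stamp_ns=None):
    """Prefix a compressed frame with a timestamp and length header."""
    if stamp_ns is None:
        stamp_ns = time.time_ns()
    return struct.pack(HEADER, stamp_ns, len(jpeg_bytes)) + jpeg_bytes

def unpack_frame(blob):
    """Inverse of pack_frame: return (stamp_ns, jpeg_bytes)."""
    stamp_ns, length = struct.unpack_from(HEADER, blob)
    header_size = struct.calcsize(HEADER)
    return stamp_ns, blob[header_size:header_size + length]
```

The actual upload transport (e.g. Kinesis Video Streams vs. a plain HTTPS endpoint) is still an open question for me, so any recommendation there would be welcome.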
Thank you again for your assistance and support.
Best regards,
Shinya Fujita