Seeking Guidance on Handling voxlCAM Sensor Data with Jetson Nano
-
I'm trying to connect the ModalAI voxlCAM to a Jetson Nano and write code that handles the various sensor data and video streams the voxlCAM provides. However, even after reading the manuals I am struggling to fully understand the process. I have reviewed the VOXL SDK documentation, but I still find it hard to grasp the necessary steps. I believe the work involves C++, ROS, MAVLink, and OpenCV, but I'm unsure where to find tutorial-style information in the references. Could you provide links to the relevant pages? If such information does not currently exist, could you advise me on how to proceed or where to find the necessary details?
-
@藤田真也 You will need to be a lot more specific with your questions. Is there a reason you need the Jetson? The VOXL has quite a bit of processing power.
-
@Eric-Katzfey
Hello Eric,
Thank you for your response. I would like to provide more details on why I am using the Jetson Nano.
Honestly, there is no absolute reason to use the Jetson Nano. However, I am considering using this device for future work purposes and for learning about self-localization functionality. Therefore, I chose the Jetson Nano as a learning tool.
Specifically, I want to learn how to process the data obtained from voxlCAM on the Jetson Nano. I am particularly interested in the following points:
- How to acquire data from the various sensors of the voxlCAM and process it on the Jetson Nano.
- Steps to obtain the video stream from the voxlCAM in C++ and process it with OpenCV.
- How to integrate the voxlCAM with the Jetson Nano using ROS and MAVLink.
If there are any tutorials or examples available for integrating these functionalities, I would like to review them.
If you know where I can find this information, I would greatly appreciate it if you could provide the links or documents. If such guides do not exist at this time, any advice on how to proceed and where to find the necessary details would be very helpful.
Thank you for your assistance.
Best regards,
Shinya Fujita
-
@藤田真也 There's a fair amount of documentation on our website. You can start with this. docs.modalai.com
-
@藤田真也 , please take a look at the architecture diagram shown here https://docs.modalai.com/images/voxl-sdk/MPA-1.0-diagram_revC_Full.png (from https://docs.modalai.com/mpa/)
The ModalAI software SDK is designed to do the majority of sensor processing on the VOXL2. Sending all of the sensor data (especially raw camera data) to another computer may not be possible, depending on the number of cameras and the resolution and FPS you want to use.
We have utilities that can convert the native sensor data (images, IMU) to ROS / ROS2:
- https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros
- https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros2
However, you would most likely need to use some sort of compressed image format to send images to another computer.
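To see why raw image data can exceed a typical network budget, here is a rough back-of-the-envelope calculation. The resolution and frame rate below are example values for illustration, not VOXL CAM specifications:

```python
# Rough bandwidth estimate for uncompressed 8-bit grayscale video.
# The resolution/FPS here are example values, not VOXL CAM specs.
def raw_mbps(width, height, fps, bytes_per_pixel=1):
    """Return the raw stream bandwidth in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# One 640x480 grayscale camera at 30 fps:
single = raw_mbps(640, 480, 30)          # ~73.7 Mbps
# A stereo pair at the same rate doubles that:
stereo = 2 * raw_mbps(640, 480, 30)      # ~147.5 Mbps
print(f"single: {single:.1f} Mbps, stereo pair: {stereo:.1f} Mbps")
```

Numbers like these add up quickly across multiple cameras, which is why compressing images before sending them off-board is usually necessary.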
Before we get into more detail, could you list all the sensor data you want to send to the other computer (the Jetson Nano)? For camera sensors, please specify image resolution and FPS.
Alex
-
@Eric-Katzfey said in Seeking Guidance on Handling voxlCAM Sensor Data with Jetson Nano:
@藤田真也 There's a fair amount of documentation on our website. You can start with this. docs.modalai.com
Hello Eric,
Thank you for your response. I understand that the VOXL platform itself has significant processing capabilities, but I would like to clarify my intent and needs a bit more.
I am using the Jetson Nano as a learning tool for a few reasons:
- To explore and understand self-localization functionalities.
- To learn how to process sensor data and video streams from the voxlCAM on an external processor like the Jetson Nano.
- To prepare for potential future work where I might need to use both VOXL and the Jetson Nano.
While I have looked through the available documentation, I am still facing difficulties with the specific integration steps. In particular, I am interested in the following:
- Acquiring and processing data from the voxlCAM sensors on the Jetson Nano.
- Using C++ to obtain video streams from the voxlCAM and process them with OpenCV.
- Integrating ROS and MAVLink for communication between the voxlCAM and the Jetson Nano.
- Detailed tutorials or examples that can guide me through these processes.
Could you please provide links or documents that offer step-by-step guidance on these topics? If such resources do not exist, I would greatly appreciate any advice on how to proceed and where I might find the necessary details.
Thank you for your assistance and understanding.
Best regards,
Shinya Fujita
-
@Alex-Kushleyev said in Seeking Guidance on Handling voxlCAM Sensor Data with Jetson Nano:
@藤田真也 , please take a look at the architecture diagram shown here https://docs.modalai.com/images/voxl-sdk/MPA-1.0-diagram_revC_Full.png (from https://docs.modalai.com/mpa/)
The ModalAI software SDK is designed to do the majority of sensor processing on the VOXL2. Sending all of the sensor data (especially raw camera data) to another computer may not be possible, depending on the number of cameras and the resolution and FPS you want to use.
We have utilities that can convert the native sensor data (images, IMU) to ROS / ROS2:
- https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros
- https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros2
However, you would most likely need to use some sort of compressed image format to send images to another computer.
Before we get into more detail, could you list all the sensor data you want to send to the other computer (the Jetson Nano)? For camera sensors, please specify image resolution and FPS.
Alex
Hello Alex,
Thank you very much for your detailed and helpful response. I appreciate the time you took to explain the architecture and provide relevant resources.
I understand that the VOXL platform is designed to handle most of the sensor processing onboard. Given this, I plan to follow the suggested architecture as much as possible for my application development.
Thank you for pointing out the utilities for converting native sensor data to ROS/ROS2:
voxl-mpa-to-ros
voxl-mpa-to-ros2
Based on your advice, my current plan is as follows:
- Use the provided utilities to convert the sensor data from the voxlCAM to ROS on the voxlCAM itself.
- Compress the sensor data on the voxlCAM.
- Publish the compressed sensor data from the voxlCAM and subscribe to it on the Jetson Nano.
- Transfer the subscribed compressed data from the Jetson Nano to AWS. (In the future, I plan to handle more complex processing on the Jetson Nano, such as generating point-cloud data or processing image-recognition results from the voxlCAM.)
- Develop an application on AWS to receive the video data and stream it to the web.
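To check my understanding of the compress-on-voxlCAM / decompress-on-Jetson round trip, here is a toy sketch. It uses Python's zlib purely as a stand-in for JPEG compression (a real pipeline would use ROS compressed image transport, e.g. sensor_msgs/CompressedImage), and the frame contents are synthetic:

```python
import zlib

# Toy round trip for "compress on voxlCAM, decompress on Jetson Nano".
# zlib stands in for JPEG here purely for illustration; real camera
# pipelines would use ROS compressed image transport instead.
WIDTH, HEIGHT = 640, 480

# A synthetic 8-bit grayscale frame (a smooth gradient, which
# compresses very well; real camera frames compress less).
frame = bytes(x % 256 for x in range(WIDTH * HEIGHT))

compressed = zlib.compress(frame)        # "publisher" side (voxlCAM)
restored = zlib.decompress(compressed)   # "subscriber" side (Jetson)

assert restored == frame                 # lossless round trip
print(f"raw: {len(frame)} bytes, compressed: {len(compressed)} bytes")
```

This is only meant to convince myself that the publish-compressed/subscribe-and-decode flow is sound before wiring up the real ROS topics.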
The sensor data I plan to use includes all the sensors on the voxlCAM:
- Stereo camera
- Optical flow sensor
- A camera that is present on the device but not mentioned in the datasheet
- IMU
I have referred to the following datasheet for details: voxl-cam-datasheet. Could you please confirm whether my understanding and approach are correct? Additionally, any guidance or resources on setting up the data flow as described, especially the video streaming to AWS, would be greatly appreciated.
Thank you again for your assistance and support.
Best regards,
Shinya Fujita
-
@藤田真也 , we do not have any examples for sending data to AWS, unfortunately.
My recommendation is to start step by step and make sure you can do the following:
- install the latest SDK on your VOXL2 (https://docs.modalai.com/voxl-suite/)
- configure voxl-camera-server for your camera setup (https://docs.modalai.com/voxl2-camera-configs/)
- make sure you can see the camera images using voxl-portal (https://docs.modalai.com/voxl-portal/#voxl-portal)
- set up voxl-mpa-to-ros or voxl-mpa-to-ros2 based on your needs
- stream camera data via ROS to a different machine (local network or in the cloud). Please note that the bandwidth requirement for uncompressed images will be high, so you may want to have ROS compress the images into JPEGs before sending.
- after you become familiar with the standard workflow on VOXL, it should be straightforward to send data to AWS (assuming you have experience with AWS).
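As a concrete sketch of the ROS streaming step: with ROS 1, both machines must point at the same ROS master. The IP addresses below are placeholders, and the exact topic names depend on your voxl-mpa-to-ros configuration:

```shell
# On VOXL2 (runs roscore and voxl-mpa-to-ros); 192.168.1.10 is an example IP
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.10

# On the Jetson Nano (example IP 192.168.1.20), point at the VOXL's master
export ROS_MASTER_URI=http://192.168.1.10:11311
export ROS_IP=192.168.1.20
rostopic list   # should now show the VOXL camera/IMU topics

# If the compressed_image_transport plugin is installed, subscribing to
# <image_topic>/compressed delivers JPEG-compressed frames instead of raw.
```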
Alex