Using MPA within voxl-docker-mavsdk-cpp
-
I see the supported method for using MAVSDK with VOXL is a preconfigured Docker container, which I have successfully set up with voxl-docker to run MAVSDK and control my vehicle from within the container.
I see that by default /run/mpa/ is shared into this container, which is great. However, when I try the libmodal_pipe hello_world example, I get an error:
fatal error: modal_start_stop.h: No such file or directory
Clearly this container isn't set up with the libmodal_pipe libraries. What's the proper way to use the MPA pipes from within the voxl-docker-mavsdk-cpp container?
-
@Evan-Hertafeld Hey, I wanted to know whether you were able to connect to the MPA server and access the data inside MAVSDK.
-
@Evan-Hertafeld said in Using MPA within voxl-docker-mavsdk-cpp:
libmodal_pipe
You need to share all of those files into the container via volume mounts, alongside the /run/mpa directory, so that code inside Docker can read data from an MPA pipe.
So in your docker run, something like
-v /usr/include/modal_start_stop.h:/usr/include/modal_start_stop.h
needs to be added for each required header and library, as in the sketch below.
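For concreteness, here is a minimal sketch of the full command. The header list, library path, and image name are assumptions; run dpkg -L libmodal-pipe on the host to see where the files actually live and adjust accordingly:

```
# Sketch of the volume-mount approach. The header names, library path,
# and <your-mavsdk-image> are assumptions/placeholders; check the real
# file locations on your host with `dpkg -L libmodal-pipe`.
docker run -it --rm \
    -v /run/mpa:/run/mpa \
    -v /usr/include/modal_start_stop.h:/usr/include/modal_start_stop.h \
    -v /usr/include/modal_pipe.h:/usr/include/modal_pipe.h \
    -v /usr/lib/libmodal_pipe.so:/usr/lib/libmodal_pipe.so \
    <your-mavsdk-image> /bin/bash
```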
The other option is to just install the actual MPA Debian package inside your Docker image and call it a day; then you will have all the necessary headers, libs, and binaries, and you only need to share /run/mpa as a volume to access the data.

Zach
-
@Zachary-Lowell-0 Hi Zach and team, I have been using the voxl-docker-mavsdk-python container successfully for MAVSDK-based drone control, and now I would like to read from VOXL’s Modal Pipe Architecture (MPA) inside the same container. I saw your suggestion about either mounting header files like modal_start_stop.h or installing the full MPA package inside the container. Could you please clarify the best way to do this? Should I install the libmodal_pipe Debian package inside the Docker image via the Dockerfile, or manually copy the headers and libraries from the host? Also, is this setup compatible with the voxl-docker-mavsdk-python container, or is it mainly meant for the C++ MAVSDK version?
I am trying to listen to a custom voxl-tflite-server service that publishes inference output from my custom neural networks, and I want to run some motion planning in Python based on that output. I would really appreciate more specific setup instructions or an example Dockerfile that includes MPA support. Thank you.
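For reference, here is a rough sketch of the install-the-package option as a Dockerfile, based on the earlier suggestion in this thread. The base-image name and .deb filename are placeholders, not verified published names, and /run/mpa still has to be volume-mounted at runtime:

```
# Sketch only: <base-image> and the .deb filename are placeholders for
# the MAVSDK image you already use and the libmodal-pipe package
# obtained from the ModalAI package repository.
FROM <base-image>

# Install libmodal_pipe so the MPA headers, libs, and binaries all
# exist inside the image.
COPY libmodal-pipe_<version>.deb /tmp/
RUN dpkg -i /tmp/libmodal-pipe_<version>.deb && \
    rm /tmp/libmodal-pipe_<version>.deb

# The pipes themselves still come from the host at runtime:
#   docker run -v /run/mpa:/run/mpa ...
```

Either way, the pipe data is only visible inside the container while /run/mpa is shared from the host.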