No data while querying /run/mpa/tflite_data/request using Python
-
The following is a Python script generated by bard.google.com; it runs, but it is not printing any data.
import ctypes
import os

# Define structures with packing
class ObjectDetectionMsg(ctypes.Structure):
    _pack_ = 1   # note: _pack_ must be assigned before _fields_ to take effect
    _fields_ = [
        ("timestamp_ns", ctypes.c_int64),
        ("class_id", ctypes.c_uint32),
        ("class_name", ctypes.c_char * 64),
        ("class_confidence", ctypes.c_float),
        ("detection_confidence", ctypes.c_float),
        ("x_min", ctypes.c_float),
        ("y_min", ctypes.c_float),
        ("x_max", ctypes.c_float),
        ("y_max", ctypes.c_float),
    ]

class DetectionsArray(ctypes.Structure):
    _pack_ = 1
    _fields_ = [
        ("num_detections", ctypes.c_int32),
        ("detections", ObjectDetectionMsg * 64),
    ]

# Open the pipe for reading
pipe_path = "/run/mpa/tflite_data/request"
pipe_fd = os.open(pipe_path, os.O_RDONLY)

# Read data continuously
while True:
    # Read a detections_array structure from the pipe
    data = os.read(pipe_fd, ctypes.sizeof(DetectionsArray))

    # Parse the data into a DetectionsArray object
    detections_array = DetectionsArray.from_buffer_copy(data)

    # Process the detections
    for i in range(detections_array.num_detections):
        detection = detections_array.detections[i]
        # Access detection fields as needed, e.g.,
        print(f"Timestamp: {detection.timestamp_ns}")
        print(f"Class ID: {detection.class_id}")
        print(f"Class name: {detection.class_name.decode()}")
        print(f"Class confidence: {detection.class_confidence}")
        print(f"Detection confidence: {detection.detection_confidence}")
        print(f"Bounding box: ({detection.x_min}, {detection.y_min}) - ({detection.x_max}, {detection.y_max})")
-
Hey, happy to try and help with this!
I would really advise you to use libmodal-pipe, a library we developed specifically for reading from and writing to pipes quickly; you can take a look at the documentation here and the source code here. I'm not an expert on the library, but I believe it's needed in order to actually get data from a pipe. On a call to pipe_client_open(), a request is actually opened with the server pipe to have data published (READONLY) to the client pipe. This isn't to say the code you've written is wrong, but I do think you have to make this function call in order to "open up the pipe" and receive data on the other side.

It would be most ideal to do all of your coding in C/C++, as this is what libmodal-pipe is written in. In that case, you could just open a client pipe and have the callback function print out the detection data you're looking for. If you want to retain your Python script, though, you could write a very small helper program in C/C++ that opens the client pipe using libmodal-pipe and then uses a more general format or method to pass the data to your Python script.
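To make that concrete, here is a rough sketch of what that small C client could look like. It assumes the pipe_client_open() / simple-helper callback API used in the libmodal-pipe examples; header, flag, and function names may differ slightly between SDK versions, so treat it as a starting point rather than a drop-in program.

// sketch of a minimal libmodal-pipe client for the tflite_data pipe
#include <stdio.h>
#include <unistd.h>
#include <modal_pipe_client.h>

// called from a helper thread every time the server writes data to our pipe
static void _data_cb(int ch, char* data, int bytes, void* context)
{
    printf("received %d bytes on channel %d\n", bytes, ch);
    // cast 'data' to the struct the server publishes and print its fields here,
    // or forward the bytes on to a Python process
}

int main()
{
    int ch = 0;
    pipe_client_set_simple_helper_cb(ch, _data_cb, NULL);
    // open a client connection to the tflite_data pipe and start the helper thread
    pipe_client_open(ch, "tflite_data", "my-tflite-bridge",
                     CLIENT_FLAG_EN_SIMPLE_HELPER, 64*1024);
    while(1) usleep(500000);   // data arrives via the callback above
    return 0;
}

Hope this helps!!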
Thomas Patton
thomas.patton@modalai.com -
@thomas Thanks Thomas! will get back to you....i'm sure!!
-
@thomas OK, massive confusion.
I was able to download the example hello scripts and compile them, and they work fine:
modal-hello-client.c
modal-hello-pause.c
modal-hello-server.c
modal-hello-sink.c
modal-kill-pipe.c
modal-pipe-ping.c

I then made copies of these, replaced hello with tflite_data, and compiled successfully.
However, I can't seem to tie into tflite_data. When I'm running voxl-tflite-server, two pipes show up, i.e. tflite and tflite_data.
Doesn't that mean these pipes are already open?
I tried my modal-tflite_data-client and got the following error:
channel 0 connected to server
sending tflite_data to server control pipe
failed to send control command to server
ERROR: control pipe not available
ERROR, pipe is not of type "text"
closing

I just want to get the box coordinates from the pipe. Help me, Obi-Wan Kenobi!
-
I appreciate that you think I've ascended to the Jedi level... I don't know if I agree but I'll keep trying to help you out!
You're getting the error because of the line with the pipe_is_type() function. That function is, for example purposes, checking that the pipe has type "text", which is the type being published in the example code. But you're modifying the example code to read from a different pipe, so the type might not be text. To check what type was being published to the pipe, I ran:
voxl2:/$ cat /run/mpa/tflite_data/info
{
"name": "tflite_data",
"location": "/run/mpa/tflite_data/",
"type": "ai_detection_t",
"server_name": "voxl-tflite-server",
"size_bytes": 16384,
"server_pid": 2910
}
Okay, so we can see "type": "ai_detection_t" here, which you may remember as the ai_detection_t struct from voxl-tflite-server. So you need to modify that pipe_is_type() call to check for "ai_detection_t" instead of "text". You could also just delete the line, since it's only a safety check. But better to be safe!!

Now that should fix your immediate issue. The final piece of the puzzle here will be creating a callback function and registering your client to it. I'm not a libmodal-pipe expert, but I think the way we do it in the example code will work for you. The simple callback defined there just passes the raw bytes of data as a char*, so what I think you'll want to do is typecast that data to type ai_detection_t*. I'm going to double-check this sometime today. But then you'll have a struct of type ai_detection_t and you can access its member variables to get what you're looking for. For right now, you should just copy-paste the ai_detection_t struct definition into your code so that your module will know what an ai_detection_t is. In the future we'll expose this header in /usr/include.
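In your modified hello-client code, that check would look roughly like this (a sketch only: I'm assuming the pipe_is_type() usage from the hello example carries over unchanged, taking the pipe name or full path plus the expected type string):

// change the safety check to expect the type voxl-tflite-server publishes
if(!pipe_is_type("tflite_data", "ai_detection_t")){
    fprintf(stderr, "ERROR, pipe is not of type \"ai_detection_t\"\n");
    return -1;
}

Hope this helps!!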
Thomas Patton
thomas.patton@modalai.com -
@sansoy so I updated my copy, modal-tflite_data-client.c, with the fields listed in tflite_data/info:
{
"name": "tflite_data",
"location": "/run/mpa/tflite_data/",
"type": "ai_detection_t",
"server_name": "voxl-tflite-server",
"size_bytes": 16384,
"server_pid": 25994
}

Recompiled the script and ran:
voxl2:/run/mpa/tflite_data$ modal-tflite_data-client
channel 0 connected to server
sending tflite_data to server control pipe
failed to send control command to server
ERROR: control pipe not available

But when I put a picture in front of it and the object detection is working, I'm getting:
voxl2:/run/mpa/tflite_data$ modal-tflite_data-client
channel 0 connected to server
sending tflite_data to server control pipe
failed to send control command to server
ERROR: control pipe not available
received 172 bytes on channel 0: LXOV��O6
received 172 bytes on channel 0: LXOV��F[6
received 172 bytes on channel 0: LXOV7?_�6
received 172 bytes on channel 0: LXOV! ��6
received 172 bytes on channel 0: LXOV
���6
received 172 bytes on channel 0: LXOV����6 -
Sweet, you should then be able to cast the char* data coming into your callback function to type ai_detection_t* and access the member variables for the x/y min/max of the boxes.
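Something like this rough sketch inside the callback. It assumes you've copy-pasted the ai_detection_t definition from voxl-tflite-server into your file, and it uses the field names listed earlier in this thread; whether "score" should be detection_confidence or class_confidence is up to you.

// sketch: cast the raw callback bytes to the ai_detection_t struct
static void _data_cb(int ch, char* data, int bytes, void* context)
{
    // the server may pack several detections back-to-back in one write
    int n = bytes / sizeof(ai_detection_t);
    ai_detection_t* det = (ai_detection_t*)data;
    for(int i = 0; i < n; i++){
        printf("Bounding Box Coordinates:\n");
        printf("score: %f, x_min: %f, y_min: %f, x_max: %f, y_max: %f\n",
            (double)det[i].detection_confidence,
            (double)det[i].x_min, (double)det[i].y_min,
            (double)det[i].x_max, (double)det[i].y_max);
    }
    return;
}

Thomas Patton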
-
@thomas so I included ai_detection.h, cast the char* data variable, and compiled successfully... and several of my services died, lol.
Service Name | Enabled | Running | CPU Usage
docker-autorun | Disabled | Not Running |
modallink-relink | Disabled | Not Running |
voxl-camera-server | Enabled | Not Running |
voxl-cpu-monitor | Enabled | Not Running |
voxl-dfs-server | Disabled | Not Running |
voxl-feature-tracker | Disabled | Not Running |
voxl-flow-server | Disabled | Not Running |

I've tried everything to restart both the camera server and the cpu monitor with no luck. Rebooted, cycled power.
Any thoughts on how to recover from this? -
@thomas from my syslog
Jan 23 00:46:51 m0054 voxl-camera-server[4202]: /usr/bin/voxl-camera-server: symbol lookup error: /usr/bin/voxl-camera-server: undefined symbol: pipe_validate_mavlink_message_t
Jan 23 00:46:51 m0054 systemd[1]: voxl-camera-server.service: Main process exited, code=exited, status=127/n/a
Jan 23 00:46:51 m0054 systemd[1]: voxl-camera-server.service: Failed with result 'exit-code'.
Jan 23 00:46:51 m0054 voxl-px4[1656]: ERROR [muorb] SLPI: Failed to parse version request response packet!
Jan 23 00:46:53 m0054 voxl-px4[1656]: message repeated 534 times: [ ERROR [muorb] SLPI: Failed to parse version request response packet!]
Jan 23 00:46:53 m0054 systemd[1]: voxl-camera-server.service: Service hold-off time over, scheduling restart.
Jan 23 00:46:53 m0054 systemd[1]: voxl-camera-server.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:46:53 m0054 systemd[1]: Stopped voxl-camera-server.
Jan 23 00:46:53 m0054 systemd[1]: Started voxl-camera-server.
Jan 23 00:46:53 m0054 voxl-camera-server[4226]: /usr/bin/voxl-camera-server: symbol lookup error: /usr/bin/voxl-camera-server: undefined symbol: pipe_validate_mavlink_message_t
Jan 23 00:46:53 m0054 systemd[1]: voxl-camera-server.service: Main process exited, code=exited, status=127/n/a
Jan 23 00:46:53 m0054 systemd[1]: voxl-camera-server.service: Failed with result 'exit-code'.
Jan 23 00:46:53 m0054 voxl-px4[1656]: ERROR [muorb] SLPI: Failed to parse version request response packet!

Actually, several enabled services won't restart:
Service Name | Enabled | Running | CPU Usage
docker-autorun | Disabled | Not Running |
modallink-relink | Disabled | Not Running |
voxl-camera-server | Enabled | Not Running |
voxl-cpu-monitor | Enabled | Not Running |
voxl-dfs-server | Disabled | Not Running |
voxl-feature-tracker | Disabled | Not Running |
voxl-flow-server | Disabled | Not Running |
voxl-imu-server | Enabled | Running | 0.0%
voxl-lepton-server | Enabled | Not Running |
voxl-mavcam-manager | Disabled | Not Running |
voxl-mavlink-server | Enabled | Not Running |
voxl-modem | Disabled | Not Running |
voxl-neopixel-manager | Disabled | Not Running |
voxl-portal | Enabled | Not Running |
voxl-px4-imu-server | Disabled | Not Running |
voxl-px4 | Enabled | Running | 32.9%
voxl-qvio-server | Disabled | Not Running |
voxl-rangefinder-server | Disabled | Not Running |
voxl-remote-id | Disabled | Not Running |
voxl-softap | Disabled | Not Running |
voxl-static-ip | Disabled | Not Running |
voxl-stitcher | Disabled | Not Running |
voxl-streamer | Disabled | Not Running |
voxl-tag-detector | Disabled | Not Running |
voxl-tflite-server | Enabled | Running | 0.8%
voxl-time-sync | Disabled | Not Running |
voxl-uvc-server | Disabled | Not Running |
voxl-vision-hub | Enabled | Not Running |
voxl-wait-for-fs | Enabled | Completed |
voxl-wfb-rx-video | Disabled | Not Running |
voxl-wfb-rx | Disabled | Not Running |
voxl-wfb-tx-video | Disabled | Not Running |
voxl-wfb-tx | Disabled | Not Running |
voxl2:/$

All I did was compile and make install my modal-tflite_data-client.c file.
-
@thomas it's all working now.
I had to reinstall the latest SDK. I'm guessing make install is a no-no, so I just did a make and ran the executable:
voxl2:/libmodal-pipe/build/examples(master)$ ./modal-tflite_data-client
channel 0 connected to server
Bounding Box Coordinates:
score: 0.558105, x_min: 406.000000, y_min: 417.000000, x_max: 535.000000, y_max: 472.000000
Bounding Box Coordinates:
score: 0.429199, x_min: 427.000000, y_min: 370.000000, x_max: 573.000000, y_max: 426.000000
Bounding Box Coordinates:
score: 0.617676, x_min: 498.000000, y_min: 300.000000, x_max: 655.000000, y_max: 358.000000
Bounding Box Coordinates:
score: 0.416748, x_min: 505.000000, y_min: 302.000000, x_max: 659.000000, y_max: 359.000000
Bounding Box Coordinates:
score: 0.739258, x_min: 506.000000, y_min: 304.000000, x_max: 667.000000, y_max: 364.000000
Bounding Box Coordinates:
score: 0.493652, x_min: 4.000000, y_min: -1.000000, x_max: 781.000000, y_max: 109.000000
Bounding Box Coordinates:
score: 0.547852, x_min: 509.000000, y_min: 306.000000, x_max: 664.000000, y_max: 363.000000
Bounding Box Coordinates:
score: 0.640625, x_min: 505.000000, y_min: 310.000000, x_max: 660.000000, y_max: 369.000000 -
@thomas Hey Thomas, we've got the handshaking between your C helper program and our Python script working well. I successfully loaded YOLOv8 on the VOXL and it works really well on images. I was wondering if we could open the camera pipe the same way as the tflite_data pipe, to run inference on the frames in real time. I noticed in your /data/web_root/index.html that you are using websockets for serving the videos in the browser. Is this the better route?
-
When you say "open the camera pipe" I'm not fully sure I understand what you mean. voxl-tflite-server in its current state already opens a camera pipe for input images and then publishes its predictions out on that tflite_data pipe, so if you're asking whether that's possible, then absolutely. Or are you trying to make an output image pipe with the predictions overlaid?

Sorry, just not fully sure what you're asking.
Thomas Patton
-
@thomas I'm not using voxl-tflite-server. On my VOXL 2 I installed the latest version of YOLOv8 from Ultralytics (https://docs.ultralytics.com/) and it works well when I run inference on local images. I wanted to test this using a local Python script that streams video frames from the hires_small_color camera and runs inference.
-
Oh OK, I see what you're going for then. However, we don't currently have an API for streaming video in Python, only C++ through libmodal-pipe, like we do in modules such as voxl-tflite-server.

If you really want to use Python, you could write a small C++ helper program to take in images from the camera and then use some sort of event-driven framework to pass those images to your Python script. Perhaps you could do something clever like writing them to a temporary directory and setting up a callback-based interface in Python to detect when new files are present.
Sorry for the indirect answer, hope this at least helps some!!
Thomas Patton
-
@thomas I already wrote a C helper that uses libmodal-pipe to read the tflite_data pipe, and my Python script reads that data in real time. It works well.
Per your message about a "small C++ helper program to take in images from the camera": how would I modify my C helper to use libmodal-pipe to stream data from one of the cameras? -
Oh great, if you already have a function that works with libmodal-pipe then you shouldn't have much trouble writing one for the cameras. Take a look at these lines for the initial code that registers the callback function on the camera pipe; this sets up _camera_helper_cb to receive images on callback. For an example of parsing that data, maybe check out these lines in a different module. There's a bit of extra info there, but it shows the gist of casting the raw char* bytes into something like an OpenCV image. Definitely modify as needed to fit your use case!
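As a starting point, a camera client could look roughly like the sketch below. The camera name hires_small_color comes from your earlier post; the camera_image_metadata_t field names are my assumption about what modal_pipe_interfaces.h exposes, so verify them against the header in your SDK before relying on this.

// sketch: receive frames from a camera pipe and hand the raw pixels off
#include <stdio.h>
#include <unistd.h>
#include <modal_pipe_client.h>
#include <modal_pipe_interfaces.h>

static void _camera_cb(int ch, char* data, int bytes, void* context)
{
    // each packet should begin with the metadata struct, followed by pixel data
    if(bytes < (int)sizeof(camera_image_metadata_t)) return;
    camera_image_metadata_t* meta = (camera_image_metadata_t*)data;
    char* pixels = data + sizeof(camera_image_metadata_t);
    printf("frame %d: %dx%d, %d bytes of image data\n",
           meta->frame_id, meta->width, meta->height, meta->size_bytes);
    // from here you could wrap 'pixels' in an OpenCV Mat, write it to a temp
    // file for your Python script to pick up, push it over a socket, etc.
    (void)pixels;
}

int main()
{
    int ch = 0;
    pipe_client_set_simple_helper_cb(ch, _camera_cb, NULL);
    // the read buffer must be large enough to hold one full frame
    pipe_client_open(ch, "hires_small_color", "my-camera-bridge",
                     CLIENT_FLAG_EN_SIMPLE_HELPER, 4*1024*1024);
    while(1) usleep(500000);
    return 0;
}

Keep me posted, happy to help however I can!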
Thomas Patton