AR0144 RGB output on VOXL2
-
After adding some extra debug statements to the `voxl-camera-server` codebase, these are my findings: the `mpa_ion_buf_pool_alloc_bufs` function allocates 16 buffers but properly initializes only 1 of the 16. The remaining 15 buffers contain corrupted or invalid data, as evidenced by the debug output showing mostly NULL pointers, invalid file descriptors, and bad memory addresses:
```
[DEBUG] Buffer[0]:  vaddress=0x5588128d50, fd=-2010712080, size=85, width=85, height=-2013126160, stride=85, slice=-2011714000
[DEBUG] Buffer[1]:  vaddress=(nil), fd=-1, size=0, width=0, height=0, stride=0, slice=0
[DEBUG] Buffer[2]:  vaddress=(nil), fd=-1, size=0, width=0, height=0, stride=0, slice=0
[DEBUG] Buffer[3]:  vaddress=(nil), fd=-1, size=0, width=127, height=-2044747776, stride=127, slice=0
[DEBUG] Buffer[4]:  vaddress=(nil), fd=800, size=127, width=127, height=0, stride=0, slice=102
[DEBUG] Buffer[5]:  vaddress=0x24000000000000, fd=1024, size=127, width=0, height=104, stride=0, slice=0
[DEBUG] Buffer[6]:  vaddress=0x50000000000, fd=-2014735536, size=0, width=0, height=0, stride=0, slice=0
[DEBUG] Buffer[7]:  vaddress=0x60000000320, fd=0, size=0, width=0, height=0, stride=0, slice=0
[DEBUG] Buffer[8]:  vaddress=0x400, fd=0, size=0, width=0, height=0, stride=0, slice=0
[DEBUG] Buffer[9]:  vaddress=0x5587d77ba0, fd=0, size=0, width=0, height=0, stride=0, slice=0
[DEBUG] Buffer[10]: vaddress=(nil), fd=-2079350784, size=0, width=0, height=0, stride=0, slice=0
[DEBUG] Buffer[11]: vaddress=(nil), fd=-2082496512, size=0, width=0, height=0, stride=2359296, slice=0
[DEBUG] Buffer[12]: vaddress=0x900000000, fd=-1, size=0, width=2359296, height=0, stride=1280, slice=800
[DEBUG] Buffer[13]: vaddress=0x7f8337a000, fd=120, size=2359296, width=1280, height=800, stride=1536, slice=1024
[DEBUG] Buffer[14]: vaddress=0x7f8307a000, fd=0, size=1280, width=1536, height=1024, stride=0, slice=-2012890624
[DEBUG] Buffer[15]: vaddress=(nil), fd=-1, size=1536, width=0, height=-2014372240, stride=85, slice=0
[DEBUG] Found 1 valid buffers out of 16 allocated
```
The debug output shows that only Buffer[13] has valid data: vaddress=0x7f8337a000, fd=120, size=2359296, width=1280, height=800. The library reports success but returns mostly unusable buffers, which causes the OMX encoder to fail during initialization, since it expects a minimum number of valid buffers and finds only 1 (hence the message `WARNING: Encoder expecting(16) more buffers than module allocated(1)`).

I implemented a temporary workaround that identifies the single valid buffer in the corrupted pool and duplicates its metadata to create 8 usable buffers (I was able to decrease the expected count from 16 to 8, but I couldn't get any lower). This involves moving valid buffers to the front of the pool and zeroing out the remaining buffers to prevent crashes. The workaround allows the OMX encoder to initialize with the minimum required buffers instead of failing with only 1 valid buffer.
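In skeleton form, the compaction workaround does something like the following (a minimal sketch; the `IonBuf` struct, its field names, and the validity checks are simplified stand-ins for whatever the real buffer pool uses):

```cpp
#include <cassert>
#include <cstring>

// Simplified stand-in for the real ION buffer descriptor (field names assumed).
struct IonBuf {
    void* vaddress;
    int   fd;
    int   size;
    int   width, height;
};

// A buffer is considered valid only if all of its fields look sane.
static bool buf_is_valid(const IonBuf& b) {
    return b.vaddress != nullptr && b.fd >= 0 && b.size > 0 &&
           b.width > 0 && b.height > 0;
}

// Move valid buffers to the front of the pool, duplicate the last valid
// buffer's metadata until `needed` slots are filled, and zero the rest.
// Returns the number of originally valid buffers found.
static int compact_pool(IonBuf* pool, int n, int needed) {
    int n_valid = 0;
    for (int i = 0; i < n; i++) {
        if (buf_is_valid(pool[i])) pool[n_valid++] = pool[i];
    }
    if (n_valid == 0) return 0;
    // duplicate metadata of the last valid buffer into the remaining needed slots
    for (int i = n_valid; i < needed && i < n; i++) pool[i] = pool[n_valid - 1];
    // zero out everything beyond what the encoder needs, to prevent crashes
    int start = (needed > n_valid) ? needed : n_valid;
    for (int i = start; i < n; i++) {
        std::memset(&pool[i], 0, sizeof(IonBuf));
        pool[i].fd = -1;
    }
    return n_valid;
}
```

Note that the duplicated entries all point at the same underlying memory, so this only masks the allocation bug; it is not a real fix.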
This is clearly a temporary solution and not a permanent fix. The root cause lies in the `mpa_ion_buf_pool_alloc_bufs` function, which appears to have a memory management issue. I was unable to find the implementation of that function, so I could not debug any further. I'm not sure whether something else in my VOXL's configuration was causing the issue; I couldn't find anything, but let me know if I'm missing something obvious.
-
@Jordyn-Heil, I have seen this issue before. It happens because you are compiling the camera server app against one version of the libmodal-pipe library, but a different version is running on the VOXL2. There was an API change in the header file causing this issue. The solution is to use the latest libmodal-pipe (from dev) when cross-compiling and running on VOXL. You don't need to update the whole SDK, just update that lib.
Alex
-
@Alex-Kushleyev, thank you! That solved my issue!
I can successfully stream the grey MISP streams over RTSP from my color AR0144s, but I am unable to stream colored frames over RTSP. Is the processing pipeline to do this implemented, and if so, what modifications need to be made to send and access these streams over RTSP?
Also, I believe you mentioned earlier that the VOXL GPU can handle panoramic stitching between camera streams? I am currently using the CPU to do panoramic stitching, but there is too much lag for my application, so I would like to try and use a compute shader to offload the work to the GPU. What is the best way to do this on the VOXL2?
-
@Jordyn-Heil, yes, I need to add the part that feeds the frames into the video encoder for the AR0144 color use case. It's not difficult; I will do it this week.
Regarding GPU usage, I have only done OpenCL programming on VOXL2, but OpenGL is also supported; I can probably find some resources to help with that. Which of the two do you prefer? OpenCL would probably be easier (for me to help you with), since we have done much more with it already.
Alex
-
@Alex-Kushleyev, great, thanks for doing that!
OpenCL should work just fine!
-
@Jordyn-Heil , sorry for the delay.
I will provide an initial example of getting 4 AR0144 streams into a standalone app and doing something with all of them on the GPU using OpenCL. This should be a nice example / starting point for multi-camera image fusion.
Initially I will just use our standard MPA approach to get the image buffers from the camera server to the standalone app. After that, I will also try to do the same using the ION buffer support (which is already in the camera server), which will allow us to map the buffers to the GPU without doing any copy operations or sending image data over pipes. Using ION buffer sharing will further reduce CPU and memory bandwidth usage, but it is still a somewhat experimental feature and will need more testing.
Please give me a few more days.
Alex
-
@Alex-Kushleyev, no worries! I appreciate your help on this!
In the meantime, would you be able to share the CAD file for the M0173 breakout board? I was not able to find it anywhere.
-
Hi @Jordyn-Heil
We try to keep this regularly updated, but sometimes miss some... in this case, M0173 is there: https://docs.modalai.com/pcb-catalog/
-
@Alex-Kushleyev, were you able to update the dev build of voxl-camera-server to encode the AR0144 color streams?
-
Sorry for the delay.
In order to test the AR0144 color RTSP stream, you can use `voxl-streamer`. I just pushed a small change that fixes the pipe description for the AR0144 color output from YUV to RGB. The current implementation publishes RGB, but the MPA pipe description was set to YUV, so `voxl-streamer` did not work. So right now, I can run voxl-streamer:
```
voxl2:/$ voxl-streamer -i tracking_front_misp_color
Waiting for pipe tracking_front_misp_color to appear
Found Pipe
detected following stats from pipe:
w: 1280 h: 800 fps: 30 format: RGB
Stream available at rtsp://127.0.0.1:8900/live
A new client rtsp://<my ip>:36952(null) has connected, total clients: 1
Camera server Connected
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
gbm_create_device(156): Info: backend name is: msm_drm
rtsp client disconnected, total clients: 0
no more rtsp clients, closing source pipe intentionally
Removed 1 sessions
```
voxl-streamer will receive the RGB image and use the hardware encoder to encode the images into the video stream.
The corresponding changes are on the dev branch of `voxl-camera-server`. Will that work for you, or do you need `voxl-camera-server` itself to publish the encoded stream? In that case, since `voxl-camera-server` only supports encoding YUVs, I would need to convert to YUV and then feed the image into the encoder.
I will follow up regarding the GPU stuff in the next post.
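For reference, the RGB-to-YUV step mentioned above is a standard color-space conversion. A minimal full-range BT.601 sketch for a single pixel (a generic reference conversion, not the camera server's actual code; the hardware encoder typically wants a planar layout such as NV12, but the per-pixel math is the same):

```cpp
#include <algorithm>
#include <cstdint>

// Full-range BT.601 RGB -> YUV for one pixel, using an 8-bit fixed-point
// approximation of the matrix coefficients. Illustration only.
static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       uint8_t* y, uint8_t* u, uint8_t* v) {
    int yy = ( 77 * r + 150 * g +  29 * b) >> 8;          // ~ 0.299, 0.587, 0.114
    int uu = ((-43 * r -  85 * g + 128 * b) >> 8) + 128;  // ~-0.169,-0.331, 0.500
    int vv = ((128 * r - 107 * g -  21 * b) >> 8) + 128;  // ~ 0.500,-0.419,-0.081
    *y = (uint8_t)std::clamp(yy, 0, 255);
    *u = (uint8_t)std::clamp(uu, 0, 255);
    *v = (uint8_t)std::clamp(vv, 0, 255);
}
```

For example, pure white (255, 255, 255) maps to Y=255, U=128, V=128 in full range.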
Alex
-
I am working on some examples of how to use the GPU with images received from the camera server via MPA (raw bayer, yuv, rgb).
I just wanted to clarify your use case.
- Currently, the camera server receives a RAW8 bayer image from the AR0144 camera (it can be RAW10/12, but that is not yet working reliably; different story)
- the camera server then uses CPU to de-bayer the image into RGB
- the RGB is then sent out via MPA and can be viewed in voxl-portal (which supports RGB) or encoded via `voxl-streamer`, which also supports RGB input.
Now, in order to do any image stitching, the following would need to be done in the client application:
- subscribe to the multiple camera streams (whether bayer or RGB)
- load camera calibration (intrinsic) for each camera
- load extrinsic parameters for each camera
- load LSC (lens shading correction) tables for each camera (or could be the same for all, close enough)
- for simplicity, let's assume there is one large output buffer allocated for all cameras to be pasted into (the panorama image). When individual images come in, the job of the app is to do the following:
- apply LSC corrections to the whole image
- (optional) perform white balance corrections, to be consistent across all cameras (this would need some analysis across all images)
- undistort the image according to the fisheye calibration params
- project the image according to the extrinsics calibration into the stitched image
This ignores the time synchronization aspect for now; basically, as each image comes in, the stitched image is updated.
Also, for your algorithm development, do you work with RGB image or YUV?
I am planning to share something simple first, which will subscribe to the 3 or 4 AR0144 cameras, load some intrinsics and extrinsics, and overlay the images in a larger image (without explicitly calibrating the extrinsics). Then we can go from there.
Alex