@SKA ,
Does this happen right away after you start voxl-camera-server?
What raw "preview" resolution / fps are you requesting in the camera server config?
Also, can you post the output of dmesg (everything after you start voxl-camera-server)?
Alex
@SKA , that buffer error (I think) is showing up because the camera pipeline selects an incorrect camera mode / resolution. Perhaps you missed one of the steps in the EIS instructions, either using the correct camera drivers or updating the override txt file.
Also, the latest docs for EIS are here, the old .md file is probably outdated : https://docs.modalai.com/camera-video/electronic-image-stabilization/
Alex
Hi @Rowan-Dempster ,
Please take a look at this example (you can build and run it too) : https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-tools/-/blob/add-new-image-tools/tools/voxl-image-repub.cpp
This app can accept a regular image (RAW8 or YUV) and either re-publish it unchanged or crop and publish the result. Sometimes this is useful for quickly cropping an image that is fed into a different application that expects a smaller image or different aspect ratio.
The app shows how to subscribe and handle ion buffer streams.
Usage:
voxl2:/$ voxl-image-repub
ERROR: Pipe name not specified
Re-publish cropped camera frames (RAW8 or YUV)
Options are:
-x, --crop-offset-x crop offset in horizontal dimension
-y, --crop-offset-y crop offset in vertical dimension
-w, --crop-size-x crop size in horizontal dimension (width)
-h, --crop-size-y crop size in vertical dimension (height)
-o, --output-name output pipe name
-u, --usage print this help message
The cropped image will be centered if the crop offsets are not provided.
Typical usage:
/# voxl-image-repub tracking --crop-size-x 256 --crop-size-y 256
/# voxl-image-repub tracking --crop-size-x 256 --crop-size-y 256 --crop-offset-x 128 --crop-offset-y 128
Example re-publishing an ION buffer image as a regular image (which you can view in voxl-portal):
voxl-image-repub tracking_front_misp_norm_ion -o test
(you can see which ion pipes are available by running voxl-list-pipes | grep _ion)
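As an aside, the centered-crop default mentioned in the usage notes above amounts to the following (a hypothetical helper in python for illustration, not code from the tool itself):

```python
def centered_crop_offsets(full_w, full_h, crop_w, crop_h):
    # Compute the x/y offsets that center a crop_w x crop_h window
    # inside a full_w x full_h image, matching the "centered if the
    # crop offsets are not provided" behavior described above.
    if crop_w > full_w or crop_h > full_h:
        raise ValueError("crop size exceeds image size")
    return (full_w - crop_w) // 2, (full_h - crop_h) // 2

# e.g. a 256x256 crop of a 640x480 tracking frame
print(centered_crop_offsets(640, 480, 256, 256))  # -> (192, 112)
```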
Please note that without the previous fix I posted above, a client process that receives an uncached ION buffer will incur extra CPU load while accessing that buffer. For example, the same voxl-image-repub client uses 1.7% cpu while republishing the normalized image (cached ION buffer), but 7.3% cpu while republishing an image from an uncached ION buffer (cpu usage % measured on one of the smaller cores).
Please try it and let me know if you have any questions.
I know this cached / uncached buffering may be a bit confusing; I will document it a bit more to help explain it better.
Alex
@SKA, you should be able to update libmodal-pipe to the latest package in the dev package repository: http://voxl-packages.modalai.com/dists/qrb5165/dev/binary-arm64//
@alan123 , yes, the 12-pin connector on VOXL2 (J19) was designed such that the first 6 pins are compatible with a lot of existing GPS receiver modules. Please double-check the pins; you should use a 12-pin connector to plug into VOXL2 (you may need to re-pin your cable).
Alex
Hi @mark ,
Thank you for the update. We will keep investigating the underlying issue caused by that questionable camera.
Meanwhile, if you think that the camera is defective and has no physical damage, you can fill out an RMA request and send it back for an exchange. I can't really comment on what could be causing this on the camera itself (it would be interesting to try it), but the camera is not serviceable by the user.
https://www.modalai.com/pages/rma
Alex
Hi @Rowan-Dempster ,
I started a new branch where I will be working on some performance optimizations in the camera server.
https://gitlab.com/voxl-public/voxl-sdk/services/voxl-camera-server/-/tree/perf-optimizations
In my initial testing, with the cpu set to perf mode and running one or two instances of the following:
voxl-inspect-cam tracking_front_misp_norm tracking_down_misp_norm
I was seeing:
1 instance (2 inspected streams) : 42% CPU (of one core)
2 instances (4 inspected streams): 58% CPU (of one core)
With the changes I just committed, I am seeing:
1 instance: 31% cpu
2 instances : 36% cpu
If you would like, you can test the camera server from this branch and see if you can reproduce the results.
Notes:
If you use the _encoded stream from the tracking camera, it will work, but in dmesg you will see messages related to "qbuf cache ops failed" -- this is still under investigation and will be fixed soon. Meanwhile, I will work on a simple example that shows the usage of ION buffers; I will try to share it a bit later today.
Alex
Tagging @Eric-Katzfey for the ArduPilot-related question.
Some answers:
3-1: VOXL ESC supports PWM input via aux pads (as you probably already know, but just noting it here): https://docs.modalai.com/voxl-mini-esc-datasheet/#pwm-inputs--outputs. The range of the commands is 1000us = 0% duty cycle, 2000us = 100% duty cycle (linear mapping). No calibration is needed or supported (the MCU clock is pretty accurate).
3-2: The recommended PWM frequency is up to 1/(2000us) = 500 Hz, as long as the pulses have some off time between them; one-shot is supported.
3-3: I don't think we have tested the mini ESC with ArduPilot in PWM mode, but the ESC should work very similarly to a standard ESC in PWM control mode.
3-4: There is no way to set the zero-throttle PWM value in the ESC params; however, here is my recommendation:
Also, we do not support Dshot input for VOXL ESCs. Spin direction reversal is not supported with the PWM commands.
UART protocol is recommended because you don't run into the PWM-related range issues and you also get telemetry back from the ESCs (and much higher command rate is supported).
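For reference, the linear mapping from 3-1 works out to the following (illustrative python sketch, not ESC firmware code):

```python
def throttle_to_pulse_us(throttle):
    # Map a throttle fraction (0.0 .. 1.0) to a PWM pulse width in
    # microseconds, using the linear 1000us = 0% / 2000us = 100% duty
    # cycle mapping described above.
    if not 0.0 <= throttle <= 1.0:
        raise ValueError("throttle must be in [0.0, 1.0]")
    return 1000.0 + 1000.0 * throttle

# At 100% throttle the pulse is 2000us wide, which is why the maximum
# update rate (with some off time between pulses) is just under
# 1 / 2000us = 500 Hz.
print(throttle_to_pulse_us(0.5))  # -> 1500.0
```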
Alex
You did not provide the actual error that you are seeing, but I can try to guess what it is (and even if I guess wrong, the details below should still help you). The default build of the voxl-opencv package does not have python3 support. So if you are using gstreamer with opencv in python and that is the error, you should install the voxl-opencv package that I built with python3 support.
Below, I will assume that you want to grab images from the Boson part of the Hadron; a similar approach should apply to the RGB camera in the Hadron.
First, you should test the ability to grab images without python. You may need to change the camera number depending on which camera id your Boson is.
IMPORTANT: make sure that voxl-camera-server is not running while you are trying to use gstreamer.
systemctl stop voxl-camera-server
Tip: you can actually stream video using X forwarding with ssh. This should stream live Boson feed from voxl2 to your linux machine:
ssh -Y username@<voxl-ip>
gst-launch-1.0 qtiqmmfsrc camera=0 ! "video/x-raw,width=640,height=512,framerate=30/1" ! videoconvert ! autovideosink
display image directly in terminal as ascii:
gst-launch-1.0 qtiqmmfsrc camera=0 ! "video/x-raw,width=640,height=512,framerate=30/1" ! autovideoconvert ! aasink
and then finally, a python script that grabs h264 video from qtiqmmfsrc, decodes it and returns frames to python:
import time
import cv2

# get RGB (BGR?) directly:
# stream = 'gst-launch-1.0 qtiqmmfsrc camera=0 ! video/x-raw, width=640,height=512,framerate=30/1 ! autovideoconvert ! appsink'

# get h264 -> decode -> BGR:
stream = 'gst-launch-1.0 qtiqmmfsrc camera=0 ! video/x-h264,format=NV12,profile=high,width=640,height=512,framerate=30/1 ! h264parse ! qtivdec ! qtivtransform ! video/x-raw,format=BGR,width=640,height=512 ! autovideoconvert ! appsink'
print(stream)

vcap = cv2.VideoCapture(stream, cv2.CAP_GSTREAMER)
frame_cntr = 0
while True:
    ret, frame = vcap.read()
    if ret:
        frame_cntr += 1
        print('got frame %d with dims' % frame_cntr, frame.shape)
Hopefully, that works for you.
Final recommendation: if you use qtiqmmfsrc this way, the Boson data is processed by the Qualcomm ISP, and unless you have a special tuning file for Boson, the processed output will have degraded quality. By default, Boson outputs a post-AGC 8-bit image which is already processed and does not need further processing by the ISP. I am not sure whether you can get RAW8 data (unmodified data from Boson) out of qtiqmmfsrc.
We handle the above issue in voxl-camera-server by working with the RAW8 data directly. We also recently started experimenting with 14-bit pre-AGC data from Boson, which needs some processing before it is usable (if you are interested in that, I can share some more information).
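To give a feel for what that processing might involve, here is a toy min-max normalization of pre-AGC 14-bit data down to 8 bits. This is an illustrative sketch only, not what voxl-camera-server actually does:

```python
def normalize_14bit_to_8bit(pixels):
    # Min-max normalize a flat list of 14-bit pixel values (0..16383)
    # down to 8-bit (0..255). Real AGC algorithms are more sophisticated
    # (histogram-based, temporal smoothing, clipping, etc.); this only
    # shows the basic idea of mapping pre-AGC data to a viewable range.
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0] * len(pixels)
    scale = 255.0 / (hi - lo)
    return [int((p - lo) * scale) for p in pixels]

print(normalize_14bit_to_8bit([100, 8241, 16383]))  # -> [0, 127, 255]
```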
Finally, if you would like to use voxl-camera-server, which is what we recommend and support, there is also a way to get encoded h264/h265 data into python (using our experimental pympa, our python MPA bindings). That is a topic for another post, if you are interested.
Alex
@Rowan-Dempster , you should use a monochrome stream (_grey), since QVIO needs a RAW8 image.
If you are not using MISP on hires cameras, that is fine, you can start off using the output of the ISP.
You should calibrate the camera at whatever resolution you decide to try. This avoids confusion: if you are using the ISP pipeline, the camera pipeline may select a higher resolution and then downscale + crop. So whenever you change resolutions, it is always good to do a quick camera calibration to confirm the camera parameters.
When using MISP, we have more control over which camera mode is selected, because MISP gets the RAW data (not processed by the ISP), so we know the exact dimensions of the image sent from the camera.
Alex
We have not tried this recently, but it should work. Here are some tips:
1996x1520 (2x2 binned) mode has about a 5.5ms readout time, which is pretty short. The relevant parameter is float32_t readoutTime in mvVISLAM_Initialize(...):
@param readoutTime
Frame readout time (seconds). n times row readout time. Set to 0 for
global shutter camera. Frame readout time should be (close to) but
smaller than the rolling shutter camera frame period.
Here is where this param is currently set to 0 in voxl-qvio-server : https://gitlab.com/voxl-public/voxl-sdk/services/voxl-qvio-server/-/blob/master/server/main.cpp?ref_type=heads#L371
voxl-camera-server reports the readout time when you run it in -d mode (readout time in nanoseconds here):
VERBOSE: Received metadata for frame 86 from camera imx412
VERBOSE: Timestamp: 69237313613
VERBOSE: Gain: 1575
VERBOSE: Exposure: 22769384
VERBOSE: Readout Time: 16328284
Good luck if you try it, and let me know if you have any other questions. Please keep in mind that QVIO is based on a closed-source library from Qualcomm, so our support of QVIO is limited.
Alex
@mark , thank you for confirming. The error is the same one I am observing. It should not matter whether it is an 8-bit or 12-bit image (in your case it is 8-bit).
In my case the issue happens on M0154 board. I don't think M0054 vs M0154 makes any difference.
Just to clarify what is actually happening..
The issue starts with some CRC errors (interference between multiple cameras, it seems). In my case, the rate of CRC errors depends on the position of the ucoax cables. (you can also watch these CRC errors and move the coax cables for the second tracking camera).
If the error occurs at a critical point (such as beginning of the frame), the camera pipeline reports a critical problem and should produce an error in the camera pipeline.
The correct behavior would be for the problematic camera to stop streaming, and in the latest SDK it will actually be restarted so the stream recovers. However, the camera is never stopped; instead it reports a duplicate image from the other tracking camera.
We are investigating the root cause, as it is somewhere low level in the camera stack (not in voxl-camera-server).
Alex
@mark , I am testing this. I am unable to reproduce it with the 8-bit drivers, but I can reproduce the behavior with the 12-bit AR0144 drivers.
Can you please test again while having dmesg -w running in another terminal (with the 8-bit standard AR0144 drivers) and see if you see something like the following when the camera feed starts being duplicated?
Thanks!
[ 1765.339349] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1765.939747] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1766.034093] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1766.067988] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1766.068475] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1766.071001] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1766.072165] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1766.100680] CAM_ERR: CAM-ISP: cam_ife_csid_irq: 4903 CSID:3 ERROR_CRC
[ 1766.234702] CAM_INFO: CAM-ISP: cam_ife_csid_halt_csi2: 1931 CSID: 3 cnt: 1 Halt csi2 rx
[ 1766.234714] CAM_INFO: CAM-CSIPHY: cam_csiphy_subdev_handle_message: 22 subdev index : 0 CSIPHY index: 0
[ 1766.234719] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 103 PHY base addr= pK-error offset=0x8b0 size=11
[ 1766.234725] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR0 = 0xc4
[ 1766.234731] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR1 = 0x1
[ 1766.234737] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR2 = 0x0
[ 1766.234742] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR3 = 0x0
[ 1766.234748] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR4 = 0x0
[ 1766.234753] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR5 = 0x0
[ 1766.234758] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR6 = 0x7
[ 1766.234764] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR7 = 0x0
[ 1766.234769] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR8 = 0x60
[ 1766.234775] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR9 = 0x50
[ 1766.234780] CAM_INFO: CAM-CSIPHY: cam_csiphy_status_dmp: 117 CSIPHY0_IRQ_STATUS_ADDR10 = 0xc4
[ 1766.234784] CAM_INFO: CAM-CSIPHY: cam_csiphy_subdev_handle_message: 22 subdev index : 1 CSIPHY index: 0
[ 1766.234788] CAM_INFO: CAM-CSIPHY: cam_csiphy_subdev_handle_message: 22 subdev index : 2 CSIPHY index: 0
[ 1766.234791] CAM_INFO: CAM-CSIPHY: cam_csiphy_subdev_handle_message: 22 subdev index : 3 CSIPHY index: 0
[ 1766.234795] CAM_INFO: CAM-CSIPHY: cam_csiphy_subdev_handle_message: 22 subdev index : 4 CSIPHY index: 0
[ 1766.234798] CAM_INFO: CAM-CSIPHY: cam_csiphy_subdev_handle_message: 22 subdev index : 5 CSIPHY index: 0
[ 1766.234813] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4616 idx 3 err 5 phy 0 lane type:0 ln num:1 ln cfg:0x2 cnt 1
[ 1766.234816] cam_csid_evt_bottom_half_handler: 1 callbacks suppressed
[ 1766.234818] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status RDI0: cc0
[ 1766.234822] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status RDI1: 0
[ 1766.234824] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status RDI2: 0
[ 1766.234827] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status RDI3: 0
[ 1766.234830] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status TOP: 0
[ 1766.234833] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status RX: 80011
[ 1766.234836] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status IPP: 0
[ 1766.234839] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status PPP: 0
[ 1766.234842] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status UDI0: 0
[ 1766.234844] CAM_ERR: CAM-ISP: cam_csid_evt_bottom_half_handler: 4621 status UDI1: 0
Hi @mark , thanks again for confirming the results.
I will be testing this today and will get back to you.
Alex
@mark ,
Thank you for running the tests!
Regarding installing resistors without knowing what their function is: not recommended. I will look into what this resistor is. Are you saying that after installing this resistor, the issue is not reproducible?
I was going to ask you to do one more test, if you can reproduce the issue. If the issue is indeed at the very low level (the same image is returned for both cameras into voxl-camera-server), then both instances of Auto Exposure algorithm would react to one camera's image changing. So the test would be..
Cover up the tracking_front camera (tracking_front is the one that is being duplicated) and watch whether the exposure of tracking_down also changes at all.
Covering tracking_down will not actually affect the image (since the stream is from tracking_front), but you should still see some changes while AE is converging after a sudden change. If there is no duplication of the image at the camera server, the AE behavior for the tracking_down camera will not change when you cover up tracking_front. I hope that makes sense. Basically, I am trying to figure out whether this is a camera server issue or somewhere downstream.
Alex
@mark, thank you for the details. It should not matter whether M0054 or M0154 voxl is used.
Can you please do a test and disable both hires cameras and see if the issue is still reproducible? If the issue is gone, then enable one hires camera and test again.
Also, does setting the cpu to perf mode help with this? (voxl-set-cpu-mode perf)
I have never seen this happen with our stable camera configs, such as C28, so this is very odd!
Alex
Hello @Teon,
This is more of a general question, not specifically related to VOXL2. However, here is some information that you might find helpful.
Base station
Rover (VOXL2)
I will ask around to see if we have better documentation for this.
Alex
@mark , we have seen this issue when using 10- or 12-bit drivers for the AR0144 tracking cameras. These drivers are shipped with the SDK but are not stable (for this exact reason).
Can you please confirm which AR0144 sensor modules you are using? You can check in /usr/lib/camera.
Alex
Hi @Rowan-Dempster,
We have been looking at some optimizations to help reduce the overall cpu usage of the camera server (not yet in the SDK). Let me test your exact use case and see what can be done.
Just FYI, we recently added support for sharing ION buffers directly from the camera server, which means camera clients get the images using a zero-copy approach. This saves the cpu cycles wasted on sending the image bytes over the pipe, especially when there are multiple clients.
If you would like to learn more about how to use the ION buffers, I can post some examples soon. On the client side, the API for receiving an ION buffer vs a regular buffer is almost the same. One difference is that the shared ION buffer has to be released by all clients before it can be re-used by the camera server (which makes sense).
Even without ION buffer sharing there is room to reduce the cpu usage, so I will follow up after testing this a bit. Regarding your question whether sending the image to multiple clients should cause significant extra cpu usage: you are correct, ideally it should not. However, the reason it happens here is related to how we set up the ION buffer cache policy. Currently, when the CPU accesses the buffers for the misp_norm images (coming from the GPU), the cpu reads are not cached and the read access is expensive. Reading the same buffer multiple times results in repeated CPU-RAM access (for data that would normally be fully cached after the first send to a client pipe). However, in some other cases (when the buffer is not used by the cpu, but is shared as an ION buffer and the client sends it directly to the GPU), this approach results in even lower CPU usage. So I think we need to choose the buffer cache policy based on the use case. More details will come soon.
Alex
@Seungjae-Baek , the resolution of the camera should depend on the use case. However, it is also important to keep in mind what exactly you are doing with the images coming from the hires cameras. For example, if you set the resolution to 4K and you try to view those uncompressed images using voxl-portal, this process will be very taxing on the CPU for the following reason: when you view uncompressed images in voxl-portal, these images are encoded with a software JPG encoder and then sent from voxl2 to your browser. This process is very cpu-heavy. On the contrary, if you use h264 / h265 stream, it should be perfectly fine to encode 4K30 video to disk or even stream, since H264 / H265 encoding is done by a hardware encoder.
voxl-portal actually does support showing h264 (but not h265) 30FPS streams, so that would be a lot more efficient for the CPU, since there would be no jpg encoding. Otherwise, if you are using raw frames (not _encoded) in voxl-portal, please keep in mind that you will always have a lot of cpu overhead. voxl-portal is designed for debugging / development purposes, so it is not necessarily the most efficient solution for video streaming. For real video streaming use cases, you would use h264 or h265 encoding, save to disk on voxl2, and stream the encoded video for remote viewing. You could encode the same camera source with two different resolutions / codecs / bitrates.
If you need help setting up a specific use case, please provide some details and I can help you further.
Alex