Hadron ov64b snapshots have a vertical image artifact
-
Can you please clarify what you meant by the following statement in your previous post:
"Also as a side note, the metadata seems to not be recording when using voxl-record-raw-image -j"
Thanks!
-
FYI, the voxl-mpa-tools updates have been merged to dev (including the jpeg saving option and the option to save timestamp, exposure, and gain in the filename -- I think that is what you were referring to). One small change: the timestamp in the filename changed from milliseconds to microseconds.
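As a hedged illustration of consuming those filenames: the exact naming scheme isn't shown in this thread, so the pattern below (`<timestamp_us>_exp<exposure_us>_gain<gain>.jpg`) is an assumption -- adjust the regex to match whatever voxl-record-raw-image actually writes.

```python
import re

# Hypothetical filename layout -- the real format used by
# voxl-record-raw-image may differ; edit the pattern to match it.
# Example name: "1718900000123456_exp5000_gain100.jpg"
PATTERN = re.compile(r"^(\d+)_exp(\d+)_gain(\d+)\.jpe?g$")

def parse_snapshot_name(name):
    """Extract (timestamp_us, exposure_us, gain) from a saved JPEG name.

    Note the timestamp is in microseconds after the dev-branch update.
    """
    m = PATTERN.match(name)
    if m is None:
        raise ValueError(f"unrecognized filename: {name}")
    ts_us, exp_us, gain = (int(g) for g in m.groups())
    return ts_us, exp_us, gain

ts, exp, gain = parse_snapshot_name("1718900000123456_exp5000_gain100.jpg")
print(ts, exp, gain)
```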
https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-tools/-/tree/dev
-
@Alex-Kushleyev I have been doing some tests and the updates to the camera server and mpa tools seem to work great, thank you! What I meant about the metadata not recording is that this data only shows up when using the snapshot command, and does not show up when using the jpeg flag with the voxl-record-raw-image command:

I also noticed the auto exposure doesn't seem to settle unless the camera's feed is opened in voxl-portal. Here are two images I took; voxl-camera-server had been running for a considerable time before I took either one.
Before looking at the live stream in the portal:

After looking at the stream in the portal for a few seconds (I could see the exposure settle down)

I didn't mess with voxl-camera-server at all in between taking the two images.
-
We can add the JPEG metadata to the JPEG file saved from voxl-record-raw-image. Which fields specifically would be good to have?
Also, regarding the exposure settling: can you please provide the exact configuration you are running? Specifically, which streams are enabled (misp, small_video, snapshot), and is auto exposure set to "isp" or not? Maybe provide a camera server config. When using MISP, I believe there is a case where, if none of the streams are being used, the AE won't run, but we can fix this for the case where snapshot is enabled.
Alex
-
@Alex-Kushleyev Our application doesn't specifically need the metadata; I just figured I should point it out in case it was intended to be there. It may be useful to have shutter speed, ISO, aperture, and date/time, though.
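For reference, the fields listed above all have standard EXIF tags; the sketch below just maps them to their tag IDs from the EXIF 2.32 specification, which may help when deciding what the camera server should embed in saved JPEGs.

```python
# Standard EXIF tag IDs (per the EXIF 2.32 specification) for the
# fields mentioned above.
EXIF_TAGS = {
    "ExposureTime":     0x829A,  # shutter speed, stored as a rational in seconds
    "FNumber":          0x829D,  # aperture
    "ISOSpeedRatings":  0x8827,  # ISO (called PhotographicSensitivity in newer specs)
    "DateTimeOriginal": 0x9003,  # capture date/time
}

for name, tag in EXIF_TAGS.items():
    print(f"{name}: 0x{tag:04X}")
```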
Ah yes, that seems to be the issue: we are using MISP, preview, and snapshot. It would be useful to have the AE run without us viewing the stream. Here is the conf file:
{
    "version": 0.1,
    "fsync_en": false,
    "fsync_gpio": 109,
    "cameras": [
        {
            "type": "boson",
            "name": "boson",
            "enabled": true,
            "camera_id": 0,
            "fps": 30,
            "en_preview": true,
            "en_misp": true,
            "preview_width": 640,
            "preview_height": 512,
            "en_raw_preview": true,
            "en_small_video": false,
            "en_large_video": false,
            "ae_mode": "off",
            "en_rotate": false,
            "misp_width": 512,
            "misp_height": 640,
            "misp_venc_enable": true,
            "misp_venc_mode": "h264",
            "misp_venc_br_ctrl": "cqp",
            "misp_venc_Qfixed": 30,
            "misp_venc_Qmin": 15,
            "misp_venc_Qmax": 50,
            "misp_venc_nPframes": 29,
            "misp_venc_mbps": 2,
            "misp_venc_osd": false,
            "misp_awb": "off",
            "misp_gamma": 1,
            "misp_zoom": 1,
            "gain_min": 100,
            "gain_max": 100
        },
        {
            "type": "ov64b",
            "name": "hires",
            "enabled": true,
            "camera_id": 1,
            "fps": 30,
            "en_preview": true,
            "en_misp": true,
            "preview_width": 9216,
            "preview_height": 6912,
            "en_raw_preview": true,
            "en_small_video": false,
            "en_large_video": false,
            "en_snapshot": true,
            "ae_mode": "isp",
            "gain_min": 100,
            "gain_max": 100,
            "misp_width": 9216,
            "misp_height": 6912,
            "misp_venc_enable": false,
            "misp_venc_mode": "h265",
            "misp_venc_br_ctrl": "cqp",
            "misp_venc_Qfixed": 38,
            "misp_venc_Qmin": 15,
            "misp_venc_Qmax": 50,
            "misp_venc_nPframes": 29,
            "misp_venc_mbps": 30,
            "misp_venc_osd": false,
            "misp_awb": "auto",
            "misp_gamma": 1,
            "misp_zoom": 1,
            "ae_desired_msv": 75,
            "exposure_min_us": 1000,
            "exposure_max_us": 1001,
            "exposure_soft_min_us": 5000,
            "ae_filter_alpha": 0.6,
            "ae_ignore_fraction": 0.2,
            "ae_slope": 0.1,
            "ae_exposure_period": 1,
            "ae_gain_period": 1,
            "max_request_queue_depth": 6,
            "en_snapshot_width": 9216,
            "en_snapshot_height": 6912,
            "exif_focal_length": 3.1,
            "exif_focal_length_in_35mm_format": 17,
            "exif_fnumber": 1.24,
            "snapshot_jpeg_quality": 90
        }
    ]
}
-
@cguzikowski , got it, thanks for the clarification. I will add a camera server config param that forces the auto exposure to run all the time, regardless of whether the streams are being used or not. This should be simple.
Also, I was wondering whether you have decided that the ISP snapshot is good enough for you, or whether you want to explore saving the RAW Bayer data and processing it offline. We have been experimenting with some approaches for offline processing and I can share some scripts, which have some flexibility in how much to de-noise, sharpen, etc. I am also going to add LSC (lens shading correction) to the offline processing (and later to the real-time misp pipeline) to correct for those artifacts you saw where the colors change across the image.
Alex
-
@Alex-Kushleyev Sorry for the delayed response. I believe the ISP snapshots are good enough quality for us, but could I still get the scripts to experiment and test with?
Also, is the most recent camera server here the version with the always-on auto exposure? If so, what is the parameter to activate it?
-
@Alex-Kushleyev Also, I just noticed the vertical artifact again. The artifact is only present in the snapshots, not in the raw images saved as JPEG; I will definitely need those post-processing scripts now.
Here is the raw image:

And here is the snapshot:

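As a side note, one crude way to localize a vertical stripe like this (an editor's sketch, not part of the original workflow) is to flag columns whose mean brightness deviates sharply from their neighbors:

```python
import numpy as np

def find_vertical_artifact(img, z_thresh=5.0):
    """Return column indices whose mean brightness deviates strongly
    from their immediate neighbors -- a crude vertical-stripe detector."""
    col = img.mean(axis=0).astype(float)              # per-column mean
    # Deviation from a local baseline (average of the two neighbors).
    baseline = (np.roll(col, 1) + np.roll(col, -1)) / 2.0
    dev = col - baseline
    z = (dev - dev.mean()) / (dev.std() + 1e-9)       # standardized deviation
    return np.nonzero(np.abs(z) > z_thresh)[0]

# Synthetic grayscale image with a bright stripe at column 30.
img = np.full((64, 100), 100.0)
img[:, 30] += 25.0
print(find_vertical_artifact(img))  # -> [30]
```

Running the same check on both the raw JPEG and the snapshot would confirm the stripe exists only in the snapshot path.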
-
@cguzikowski , thanks for following up.
I don't know why the ISP is outputting the JPEG with this issue - that is outside our area of expertise. It could be a bug in the ISP or the JPEG encoder.
I will get together some scripts I have been using for testing. I need to add the LSC (lens shading correction); otherwise the colors look wrong and the image gets darker towards the edges. In order to apply LSC correction, we need a map (either a look-up table or a polynomial fit) of each channel's response as a function of pixel coordinate (or radius). This needs to be calibrated (not for each camera module, but for each camera type + lens type), so after calibration my results should apply to your camera as well.
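To make the idea concrete, here is a minimal sketch of applying a radial per-channel gain, assuming a simple even-polynomial model gain(r) = 1 + c1·r² + c2·r⁴; the actual calibration and model used for the ov64b are not shown in this thread, so the coefficients below are made up for illustration.

```python
import numpy as np

def apply_lsc(img, coeffs):
    """Apply a radial lens-shading gain to each channel.

    img    : HxWx3 float array, linear RGB.
    coeffs : per-channel (c1, c2) so that gain(r) = 1 + c1*r^2 + c2*r^4,
             where r is the normalized distance from the image center
             (r = 1 at the corners). In practice these would come from
             a one-time flat-field calibration per camera + lens type.
    """
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # Squared radius, normalized so the corners sit at r^2 = 1.
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / (cx ** 2 + cy ** 2)
    out = np.empty_like(img)
    for ch in range(3):
        c1, c2 = coeffs[ch]
        out[..., ch] = img[..., ch] * (1.0 + c1 * r2 + c2 * r2 ** 2)
    return out

# Example: lift the edges more in red/blue than green (made-up numbers).
flat = np.full((100, 200, 3), 0.5)
corrected = apply_lsc(flat, coeffs=[(0.8, 0.2), (0.5, 0.1), (0.9, 0.3)])
```

With these numbers, a corner red pixel gets gain 1 + 0.8 + 0.2 = 2.0 while the center is left essentially untouched, which is the qualitative behavior a shading correction needs.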
I will follow up early next week.
Alex
-
@cguzikowski , quick question for you. For your application, does it matter how much time (within reason) it takes to debayer the image? Some of my offline processing scripts are not optimized for real-time operation but are very flexible to use, and the full-resolution image from the ov64b is huge, so it may take a few seconds to process without optimizations.
We also have OpenCL code that runs offline pretty quickly on an NVIDIA GPU, but it does not have as many tuning knobs.
I guess it all depends on whether you are using the images right on the VOXL 2 or just collecting them and analyzing offline at a later time.
Alex
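The offline scripts themselves are not shown in this thread; as a hedged sketch in their spirit, the fastest possible demosaic collapses each 2x2 RGGB block into one RGB pixel, trading half the resolution for speed. Real pipelines would use bilinear or edge-aware demosaicing plus the denoise/sharpen/LSC steps discussed above.

```python
import numpy as np

def debayer_half_res(raw):
    """Very simple demosaic for an RGGB mosaic: collapse each 2x2 block
    into one RGB pixel (R, mean of the two Gs, B). Fast but halves the
    resolution -- proper pipelines use bilinear or edge-aware methods."""
    r  = raw[0::2, 0::2].astype(float)   # top-left of each block
    g1 = raw[0::2, 1::2].astype(float)   # top-right
    g2 = raw[1::2, 0::2].astype(float)   # bottom-left
    b  = raw[1::2, 1::2].astype(float)   # bottom-right
    return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)

# Synthetic 4x4 RGGB mosaic of a uniform scene.
raw = np.tile(np.array([[200, 100],
                        [100,  50]]), (2, 2))
rgb = debayer_half_res(raw)
print(rgb.shape)  # -> (2, 2, 3)
```

Even unoptimized, this runs quickly on a 9216x6912 frame because it is pure strided slicing; the "few seconds" cost comes from the more careful interpolation and filtering steps.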