Hi @jon , do you always see the following warning right before the seg fault? (and never see it when there is no seg fault?)
@jon said in voxl-logger seg fault on shutdown:
WARNING, _stop_helper_and_remove_pipe timed out joining read thread
Alex
Hi @qubotics-admin ,
Our understanding is that the voxl2_io driver should present itself as a generic PWM output (up to 8 channels) and you should be able to map any function you need to it, although we have not explicitly tested every function. Can you please clarify which version of voxl-px4 you are using and elaborate on the "limited options" that you have for selecting the PWM function (what are you expecting to be able to set and what are the options)?
Alex
Which TOF sensor do you have plugged into VOXL2 Mini (there are two options) and what is the camera type listed for the TOF entry in voxl-camera-server.conf? There may be a mismatch..
Config file TOF options are:
pmd-tof for the (EOL) TOF V1: https://docs.modalai.com/M0040/
pmd-tof-liow2 for the new TOF V2: https://docs.modalai.com/M0169/ (https://docs.modalai.com/M0178/) - two different variants that differ in the connector type that connects them to VOXL2
Alex
Hi @Rowan-Dempster , you are right, actually, there is no option right now to log from an ion buffer using voxl-logger. However, voxl-replay has an option to send camera frames as ion buffers. I have not tested it recently, but you could give it a try : https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-logger/-/blob/extend-cam-logging/tools/voxl-replay.cpp
For offline processing, it should not matter much whether you are using ion buffers or not. There would be a bit more cpu usage, but hopefully not too much. Having voxl-replay support cam playback as ion buffers is probably more important than using ion buffers for logging, since then your offline processing pipeline uses the same flow (ion buffers) as the live vio pipeline.
We may add logging from ion buffer, but it's probably not a high priority.
By the way, i wanted to mention one detail. I recently made a change in camera server dev branch to allocate the ion buffers that are used for incoming raw images as uncached buffers. This actually happens to reduce the cpu usage (so that cpu does not have to check / flush cache before sending the buffer to the GPU). The cpu reduction was something like 5% of one core per camera (for a large 4K image). For majority of hires camera use cases, this is beneficial, because usually the cpu never touches the raw10 image before sending to GPU.
However, when you are logging the raw10 image to disk using voxl-logger, the cpu will have to read the contents of the whole image and send it via the pipe - uncached reads are more expensive. There will be increased cpu usage (i don't remember how much), but it should still be fine unless you are trying to log very large images at high rate. If you wanted to profile the cpu usage while logging, you can just disable making the raw buffers uncached and see if that helps. I have not yet figured out a clean way to handle this, maybe i will add a param for the type of caching to use for the raw camera buffer.
look for the following comment in https://gitlab.com/voxl-public/voxl-sdk/services/voxl-camera-server/-/blob/dev/src/hal3_camera_mgr.cpp :
//make raw preview stream uncached for Bayer cameras to avoid cpu overhead mapping mipi data to cpu, since it will go to gpu directly
Alex
Hi @Rowan-Dempster ,
Yes there are still some perf optimizations that i need to merge to dev. I will take a look at what is left...
Regarding the intrinsics question.. In a perfect world, if you have a 4040x3040 image and perform intrinsic calibration (focal length, principal points, distortion), then scaling the resolution by a factor of N scales your focal length and principal points by exactly N, while the fisheye distortion parameters remain the same (because the fisheye distortion is a function of angle, not pixels, and downscaling the image will not change the angle of the pixels, as long as the focal length and principal points are downscaled accordingly).
In practice, if you want to map calibration from one resolution to another resolution, you have to be careful and know exactly how the two resolutions relate to each other. Specifically, there may be a pixel offset in second resolution. Take a look at the two resolutions that we are dealing with : 4040x3040 and 1996x1520 and you will notice that 4040/1996 != 3040/1520 -- so what happened? Why arent we using 2020 as the exact factor of 2 binning of 4040?
The answer relates to the opencl debayering function, which uses hw-optimized gpu instructions, which only accepts certain widths without having to adjust image stride before feeding the buffer to the GPU (which would have some cpu overhead). It so happened that 1996 was the closest to 2020 which was an acceptable resolution for the specific opencl debayering function. The camera would happily output 2020x1520 if we configured it to do so.
So the next question is then how 1996 pixels relates to the perfect size of 2020 (which would be the exact N=2 downscale of 4040). The answer (and this is very specific to how we implemented it at the time of writing the driver..) is that to get to 1996 from 2020, we cut off pixels on the right of the image (essentially offset 0, but the width of the image is reduced to 1996). This actually means that the x principal point will be in exactly the same location (divided by 2). This would not be true if the 1996 resolution was shifted (centered around 2020/2 = 1010, which is something potentially more reasonable to do).
So you should be able to scale the focal length AND principal points by 2 to get the calibration for 1996x1520 resolution, if you are converting intrinsics calibrated at 4040x3040. But don't take my word for it, you should double check it and calibrate in both resolutions. After confirming, you should not have to do two calibrations for each camera.
In future, we may enable the full (uncropped) binned resolution, but it's a pretty small crop, so it may not be worth it.
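As a sanity check, the intrinsics scaling described above can be sketched in a few lines of Python (the numeric values below are made-up placeholders, not an actual calibration):

```python
# Hypothetical helper: map intrinsics calibrated at 4040x3040 to the
# 1996x1520 stream. Fisheye distortion coefficients are a function of
# angle, not pixels, so they stay the same after downscaling.
def scale_intrinsics(fx, fy, cx, cy, factor=2.0):
    # Binning by N divides focal length and principal point by N.
    # The 2020 -> 1996 width reduction crops the RIGHT edge only
    # (offset 0), so cx needs no extra shift after dividing by 2.
    return fx / factor, fy / factor, cx / factor, cy / factor

# Placeholder calibration values for illustration only:
fx, fy, cx, cy = scale_intrinsics(2800.0, 2810.0, 2015.0, 1522.0)
print(fx, fy, cx, cy)  # 1400.0 1405.0 1007.5 761.0
```

As noted above, do verify this against an actual calibration at both resolutions before trusting it.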
Alex
Here is an outline of the data flow that you may want to start with:
option 1: simple:
option 2: more complex:
both options:
Hi @Rowan-Dempster ,
The only things that you cannot change after the raw10 images are collected are:
voxl-inspect-camera or any other client to get the yuv or grey stream going (or even encoded)
I would suggest not collecting too many data sets until you actually get the pipeline working, as you may realize that there is an issue in the data sets or something is missing. Probably best to focus on the processing pipeline..
Another thing to keep an eye on is what you are logging (bandwidth). I did some tests a while ago and voxl2 write speed to disk is quite high, about 1.5GB/s, but you can run out of disk space pretty quickly. However, it should definitely be able to log 4040x3040@30 + 1996x1520@30 (or @60). You will probably need to use the ion buffers to log the raw10 images (which is supported, i just need to test), to skip the overhead of sending huge images over the pipes. Camera server already publishes the raw bayer via regular pipes and also ion buffers.
I am going to look over the components needed for this and make sure they are in good state:
But either way, you should be able to start with logging tests and just see if you can playback the logs and get QVIO to do something reasonable from the log playback on target. To bypass MISP, you could just log the output of MISP for now (grey or normalized), so you have fewer components in the initial test pipeline. Then build it up to include debayering, etc.
Perhaps it is easy to log the raw10 bayer + grey or normalized image, so that you have the AE issue solved as well (making sure the AE is running), then for playback you can choose either normalized (feed directly into voxl-qvio-server) or raw10, which will need some debayering and other processing. It is good to have choices. But i would suggest starting with lower resolution first until you also double check ability to log 4K raw images.
Alex
Please see the following commit where we recently enabled publishing the normalized frame from IMX412 and IMX664 camera via regular and ION buffers: https://gitlab.com/voxl-public/voxl-sdk/services/voxl-camera-server/-/commit/c42e2febbc6370f9bbc95aff0659718656af6906
The parameters for 1996x1520 look good, basically you will be getting 2x2 binned (full frame) and then further down-scaled to 998x760. Since you are doing an exact 2x2 downscale in misp, you can also remove interpolation, which will make the image a bit sharper. You can see this for reference: link -- basically change the sampler filter mode from linear (interpolate) to nearest. If you use non-integer down-sampling, keep the linear interpolation.
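For reference, here is what an exact 2x2 nearest-neighbor downscale amounts to (a NumPy sketch, not the actual MISP kernel):

```python
import numpy as np

def downscale_nearest_2x(img):
    # Exact integer 2x downscale: take every other pixel, no
    # interpolation. Equivalent to a "nearest" sampler for N=2,
    # which avoids the slight softening that linear filtering adds.
    return img[::2, ::2]

frame = np.zeros((1520, 1996), dtype=np.uint8)  # 2x2 binned full frame
small = downscale_nearest_2x(frame)
print(small.shape)  # (760, 998)
```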
Regarding the resolution to use with VIO.. i think the 998x760 with nearest sampling should behave the same or better than AR0144 with 1280x800 resolution, mainly because the IMX412 has a much bigger lens (while still being pretty wide), so image quality is going to be better (of course, you need to calibrate intrinsics). Also the 4:3 aspect ratio may help capture more features in the vertical direction. That of course does not account for rolling shutter effects..
There can definitely be benefit in going up in resolution and using 1996x1520, but you kind of have to use the extra resolution correctly.. typically you would detect features on the lower resolution image and then refine using the full resolution (also for tracking features). However, in practice, often some small blur is applied to the image to get rid of pixel noise, etc, so very fine features won't get picked up. Unfortunately, we do not know exactly what QVIO does internally. It may do some kind of pyramidal image decomposition to do these things in a smart way. You should try it and check the cpu usage.
Using MISP you can downsample and crop (while maintaining aspect ratio) to any resolution, so it's easy to experiment.
If i had to test QVIO at different resolutions, i would log raw bayer images and imu data using voxl-logger and then use voxl-replay + offline misp + offline qvio to run tests on the same data sets with different processing parameters. This may sound complicated, but it's really not:
So, if you are really serious about using hires camera for QVIO, since there are a lot of unknowns, you should consider setting up an offline processing pipeline, so that you can run repeatable tests and parameter sweeps. It requires some upfront work, but the pay-off will be significant. You can also use the offline pipeline for regression testing of performance and comparing to other VIO algorithms (which just need the MPA interface). We can discuss this topic more, if you are interested.
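As a sketch of what such an offline parameter sweep could look like -- note that run_pipeline() here is a hypothetical stand-in for voxl-replay + offline MISP + offline QVIO, and all parameter names and data set names are made up:

```python
import itertools

def run_pipeline(dataset, width, height, blur_sigma):
    # Hypothetical stand-in: replay the logged data set through
    # offline MISP + QVIO with the given parameters and return a
    # quality metric (e.g. final position drift in meters).
    return 0.0

datasets = ["log0001", "log0002"]          # placeholder log names
resolutions = [(998, 760), (1996, 1520)]
blur_sigmas = [0.0, 0.5, 1.0]

results = {}
for dataset, (w, h), sigma in itertools.product(datasets, resolutions, blur_sigmas):
    results[(dataset, w, h, sigma)] = run_pipeline(dataset, w, h, sigma)

print(len(results))  # 12 runs over the same two logged data sets
```

The key point is that every run uses the same logged data, so comparisons between parameter sets are repeatable.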
imx412_fpv_eis_20250919_drivers.zip are the latest for IMX412. We should really make them default ones shipped in the VOXL2 SDK, but we have not done it.
Since you are maintaining your own version of voxl-camera-server, you should add them to your voxl-camera-server repo and .deb and install them somewhere like /usr/share/modalai/voxl-camera-server/drivers. Then modify the voxl-configure-camera script to first look for imx412 drivers in that folder and then fall back to searching /usr/share/modalai/chi-cdk/. In fact this is something I am considering, as maintaining camera drivers in the core system image is less flexible.
EDIT: i guess the older version of the imx412 drivers are already in the repo, so you can just replace them with new ones in your camera server repo: link
Let me know if you have any more questions. Sounds like a fun project 
Alex
@Myles-Levine , if you are masking out 90% of the down-facing camera, then it is probably useless: any movement of the features will get them out of the unmasked region very easily and the features will be dropped.. It may hurt VIO as there may be features going in and out, only trackable for a few frames (which just adds to the complexity).
You should try to move the down-facing camera to free up its FOV, or even have it facing slightly angled toward the back of the drone..
Alex
Hello @Catalystmachine ,
Sorry for the confusion. We have updated the page to be more clear that Boson is not included: "(No Boson)" was added to the kit description.
The product title already states "VOXL 2 MIPI Boson+ Adapters for Thermal IR FPV", and the kit description used to say (before we added "(No Boson)"): "PCB Adapters Only..."
The price of the Boson cameras varies significantly, depending on the specifications. You can contact a distributor to get more details. For example : https://www.oemcameras.com/product-category/thermal-imaging-cameras/thermal-imaging-cores/flir-boson-series-htm/boson-plus/ (you can find more at https://oem.flir.com/contact/find-a-dealer/)
Please note that Boson+ series of the Boson sensor is required for compatibility with VOXL2.
Regarding the pricing of the adapter kit, we do offer discounts in higher volumes, if you are interested, please send us a request: https://www.modalai.com/pages/contact-us

@Kashish-Garg-1 , please see the following discussion regarding M0149 without an IR filter as well as a lens part to reference if you want to buy replacement lenses only.
If you would like to use M0149 (as opposed to M0166), the best approach would be to buy the M0149 cameras in original configuration (with IR filters) as well as additional lenses without IR filter and then swap the lenses (use contact form for a custom order with lenses). Sorry, we won't modify the standard M0149 units. I hope this helps!
Please note that lenses that are shipped with M0166 are about 1mm shorter in Z direction (along the optical axis) compared to the lenses shipped with M0149, however the lens specs are very similar. Please perform testing to ensure compatibility for your application.
https://forum.modalai.com/topic/4826/msu-m0149-1-ir-filter
Alex
@groupo , sorry, you are right, an update is overdue. Please give us a few more days to provide an official response.
Alex
Hi @austin-c ,
I ran some tests with Tmotor MN4006-23 380kV and 15-inch MS1503 propeller, using M0138 ESC. Tests were done with a battery and a power supply. Please see results below.

Please note that, in theory, it is possible to decelerate the motor faster without regenerative braking (by applying more power, but out of phase), however we do not have this implemented.
If the motor deceleration is slowed down due to lack of regenerative braking, the attitude control will be affected because the motor response is highly asymmetric for acceleration vs deceleration. Overall, this effectively reduces the average responsiveness of the motor, so you would need to decrease control gains, probably will not be able to handle large disturbances very well, and may see oscillations if not tuned properly.
Please review the plots (which are not surprising), and we can discuss further..
Alex
6S battery, Power step test:

6S battery, RPM small step test

6S battery, RPM large step test

11A Power Supply, Power step test:

11A Power Supply, RPM small step test

You should not have any unmasked static objects appear in the camera view: they may have features that will be tracked and will confuse the algorithm, because these features will be stationary with respect to the drone even if the drone is moving.
OpenVINS should support masking parts of the camera(s) to ignore any features in this region. I am not sure how to enable the mask, will double check with colleagues (you could also do a search..).
Worst case scenario, the part of the image to be ignored should be blacked out before feeding into OpenVINS - that would be the solution if the mask is not supported. Hopefully that is not needed and the mask can be used.
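If the mask option turns out not to be supported, blacking out the region before feeding frames in is simple (NumPy sketch; the mask convention here is an assumption):

```python
import numpy as np

def black_out(img, mask):
    # mask: same shape as img, nonzero = keep pixel, zero = black out.
    # Features cannot be detected in an all-black region, so this has
    # roughly the same effect as a detector-level mask.
    out = img.copy()
    out[mask == 0] = 0
    return out

img = np.full((800, 1280), 128, dtype=np.uint8)
mask = np.ones_like(img)
mask[:, 640:] = 0            # e.g. ignore the right half of the frame
masked = black_out(img, mask)
print(masked[0, 0], masked[0, 700])  # 128 0
```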
Alex
@SKA , I am not sure. Please disable all streams except preview and try again. Specifically disable:
Double check to make sure:
en_raw_preview: true
en_preview: true
en_misp: true
It seems you have at least small_video enabled.
What camera resolution are you requesting? (preview_width, preview_height)
Alex
@ey , please send us a request via http://www.modalai.com/pages/contact-us
Alex
@austin-c , i don't have any concerns about disabling active freewheeling other than the fact that the responsiveness of the ESC will be reduced, as the RPM reduction will be purely due to air drag acting on the propeller (and small friction in the motor).
I can set up the MN4006 motor with a 13 or 15 inch propeller, do an RPM control tune and compare the RPM response results with regen on and off. This is actually pretty quick to do, i will try to do it in the next few days.
Please ping me if you don't hear back by early next week 
Alex
@Jetson-Nano , since we currently have no plans to do the MIPI integration of this sensor, you always have the option to use USB connection and voxl-uvc-server to publish images via MPA. You would need to check if the sensor supports the standard UVC interface and try it out...
Alex
Also, you should note - the motor's open loop RPM increases at the same power setting -- this is a consequence of not having active freewheeling. This does not mean, though, that disabling active freewheeling (regen braking) would get you higher max rpm or more efficiency. It just changes the motor pwm % to rpm mapping.
Alex
@austin-c , i just checked and actually we have an option to disable regen braking, it is just not available yet via a param. Here is the test result with the same setup (no prop) with regen braking disabled. You can see that the motor takes a long time to coast down, but there is no regen voltage spike or regen current at all. If your application does not require rapid rpm change (high to low), then it will work fine.
We have not tested this in a while, but at some point we did testing for an application with a smart battery which did not like being re-charged with the regen current.
I will see if we can enable this via param or worst case scenario I can share the latest firmware with regen off.
Regarding your other questions:
Also, since your motor is low kv, i just want to make sure you saw this document (you probably did, just double checking) : https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-esc/-/blob/master/voxl-esc-tools/doc/low_kv_motor_tuning.md
Actually, the app note for low kv tuning was using the Tmotor MN4006 motor with an even larger 15 inch propeller (see the Tuning Example section). It should work fine with appropriate parameters, and we can help with tuning if you tell me the exact propeller.
motor MN4006-23 380kV, 18N24P winding configuration
12 pole pairs (24/2)
15-inch propeller MS1503
6S battery voltage
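As a quick arithmetic check on the pole-pair count above (illustration only, not voxl-esc code):

```python
# MN4006-23 has an 18N24P winding: 24 magnets -> 24 / 2 = 12 pole pairs.
pole_pairs = 24 // 2

def electrical_hz(mech_rpm, pole_pairs=pole_pairs):
    # Electrical commutation frequency (Hz) the ESC must drive for a
    # given mechanical shaft speed (RPM): rpm / 60 * pole_pairs.
    return mech_rpm / 60.0 * pole_pairs

print(pole_pairs)           # 12
print(electrical_hz(5000))  # 1000.0
```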
Alex
