Hello @Gerhold-Ten-Voorde ,
This is not related to the quality, but instead, it is a supply chain decision.
Alex
@jbiscan21 , yes you can add the TOF sensor to Starling 2 Max, but you will also need a new front plastic mount that holds the hires and tracking cameras as well as the new TOF sensor.
You can order the mount (a drop-in replacement for the original camera mount) together with the TOF sensor. If you do want to proceed with the TOF sensor + plastics order, we will need to double-check that all the necessary screws are included. To start this off, please send an email for a custom order, mention this thread, and note that we need to include the mounting hardware: https://www.modalai.com/pages/contact-us
(related post : https://forum.modalai.com/topic/5157/c29-configuration)
See more detailed pictures below.
Alex
@jbiscan21 , sorry, I am surprised that we don't have a photo of the Starling 2 Max with the TOF mounted. Please see below. The TOF sensor is oriented vertically on the front face, right next to the front hires and tracking cameras.
If you need to mount the sensor somewhere else, you can use an available extension flex cable (it only works with the TOF V2 sensor): https://www.modalai.com/products/msu-m0170

@leandro , yes we do have M0195 boards available. Please send us a contact inquiry to get a custom order (https://www.modalai.com/pages/contact-us).
The connectors J01, J23, J45 and J67 are not populated - we do not have a build with those connectors. You would need to add them yourself.
Also, we are currently not (yet) officially supporting the ToF pills on this board; there is only limited internal testing being done. We may be able to provide limited support for that. If you are interested, I can find out what the status of this effort is.
The MCBL-00128 cable we have is only 75 mm, unfortunately.
There is a way to make the TOF V2 work on VOXL2 Mini, however it is a bit of a hack (it temporarily enabled a transition to the new TOF V2 while using the old camera connectors, which did not provide enough power).
Note that if you use the M0195 camera front end (or M0188), then you will not be able to use the TOF V2, because both camera connectors on VOXL2 Mini would be used up.
What's included in the M0169 kit (from our shop page linked above):

@pmeras , the connector specs are provided here : https://docs.modalai.com/micro-coax-user-guide/ . Specifically, the connector part number is DF56C-26S-0.3V(51).
You can find some more information regarding the cable pinout for our hires camera here : https://docs.modalai.com/M0186/
We do not manufacture custom cables, just the ones used in our VOXL camera ecosystem.
Alex
@Mason-N , sorry for the delay. Please contact us via https://www.modalai.com/pages/contact-us for spare parts you can't find in the shop.
For the 5-inch Seeker, the propellers are HQProp DP5.1X4.1X3 3-blade 5.1-inch polycarbonate propellers, so you can purchase them directly from other vendors if you wish (it looks like we do have some in stock as well).
Also, checking on the documentation...
Alex
@Matt69 , do you need the additional PWM outputs to be connected via PX4? If not, perhaps connecting M0065 to a CPU UART and communicating directly would be the simplest (I can help you with that).
Do you have spare SLPI UARTs?
I'm not sure if PX4 supports multiple voxl2-io drivers running at once; I guess it should work, but I don't think we have tried it. If you have spare SLPI UART ports, just give it a shot. There may be a problem with two actuator drivers having the same name, though.
Alex
@AndrewC , sorry for the delay.
There are no hardware modifications to VOXL2 before installing it into Starling 2 Max. You would just need to configure the SKU on your new VOXL2 board (https://docs.modalai.com/sku/) and reflash the latest SDK to start off clean. During the SDK install, there are a few steps (voxl-configure-mpa) that will load all the needed configurations for your SKU.
Please let us know if you have any issues with that.
Alex
In our voxl-camera-server, we have a default intrinsics config for the IMX412 camera / lens as follows (the focal length approximation is based on several calibrations). Basically, it is a fisheye lens with about a 2000 px focal length at full resolution (half of that at half resolution, etc). The camera's full sensor width is 4056 (hence the x principal point is exactly half of that, 2028). In some cases we use a slightly trimmed frame (4040 px wide). In any case, that should be enough for a good starting point.
lens_cal.width = 4040;
lens_cal.height = 3040;
lens_cal.fx = 1999.0;
lens_cal.fy = 1999.0;
lens_cal.cx = 2028.0;
lens_cal.cy = 1520.0;
lens_cal.is_fisheye = 1;
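As a rough illustration (not ModalAI code, just a sketch of the scaling rule described above), the intrinsics scale linearly with resolution, so a half-resolution stream gets half the focal length and principal point:

```python
def scale_intrinsics(fx, fy, cx, cy, scale):
    """Scale camera intrinsics when the image is resized.

    Focal lengths and the principal point scale linearly with
    resolution, so at half resolution (scale = 0.5) the ~2000 px
    focal length becomes ~1000 px.
    """
    return fx * scale, fy * scale, cx * scale, cy * scale

# Full-resolution IMX412 defaults from the config above
fx, fy, cx, cy = 1999.0, 1999.0, 2028.0, 1520.0

# Half-resolution stream (e.g. 2020x1520)
print(scale_intrinsics(fx, fy, cx, cy, 0.5))  # (999.5, 999.5, 1014.0, 760.0)
```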
Alex
@cguzikowski , quick question for you: for your application, does it matter how long (within reason) it takes to debayer the image? Some of my offline processing scripts are not optimized for real-time operation but are very flexible to use, and the full-resolution image from the OV64B is huge, so it may take a few seconds to process without optimizations.
We also have OpenCL code that runs offline pretty quickly on an NVIDIA GPU, but it does not have as many tuning knobs.
I guess it all depends on whether you are using the images right on the VOXL2 or just collecting them and analyzing offline at a later time.
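For readers unfamiliar with the cost involved: demosaicing touches every pixel, which is why a 64 MP OV64B frame takes seconds without optimization. This is not one of the ModalAI scripts mentioned above, just a toy, dependency-light nearest-neighbor demosaic sketch (assuming an RGGB pattern) to illustrate the operation:

```python
import numpy as np

def debayer_nearest(raw, pattern="RGGB"):
    """Very simple nearest-neighbor demosaic for offline inspection.

    raw: 2D array of bayer mosaic samples (height and width even).
    Returns an HxWx3 RGB image; each 2x2 cell reuses its own R sample,
    averaged G samples, and B sample, so quality is low, but it shows
    why full-resolution demosaicing scales with the pixel count.
    """
    assert pattern == "RGGB", "only the RGGB layout is sketched here"
    h, w = raw.shape
    rgb = np.empty((h, w, 3), dtype=raw.dtype)
    r = raw[0::2, 0::2]                                        # red sites
    g = (raw[0::2, 1::2].astype(np.float64) + raw[1::2, 0::2]) / 2  # two green sites
    b = raw[1::2, 1::2]                                        # blue sites
    for c, plane in enumerate((r, g.astype(raw.dtype), b)):
        # expand each 2x2 cell's color estimate back to full resolution
        rgb[..., c] = np.repeat(np.repeat(plane, 2, axis=0), 2, axis=1)
    return rgb
```

Real pipelines use bilinear or edge-aware interpolation instead, which is where the extra processing time goes.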
Alex
hi @jameskuesel , nice to hear from you
Please note that we recently added new functionality to voxl-record-raw-image to save to jpeg using the software jpeg encoder (turbojpeg, similarly to how it's done in voxl-portal for sending the images to the browser):
voxl-record-raw-image -h
Record a raw image from an MPA pipe to disk
This is typically used for inspecting raw or YUV
image data formatting or for loading into something
like MATLAB or OpenCV for post-processing.
Optional arguments are:
-d, --dir {dir} directory to save output files in (default: /data/raw-images/)
-h, --help print this help message
-j, --jpeg convert to JPEG before saving (supports NV12, NV21, RGB, RAW8 formats)
-q, --jpeg-quality JPEG quality for compression (1-100, default: 90)
-n, --num-images {n} number of images to save from the pipe (default 1)
-m, --meta save metadata in file name (timestamp (us), exposure (us), gain)
-s, --skip {n} skip n frames before saving a new frame
VOXL2 should be able to encode up to 4x 4K30 streams (the video encoder can do 8K30), so there is no limitation on encoding two 4K30 streams (even if each stream is larger than 4K, such as 4040x3040, two of them should still be fine). Since MISP supports up to 4 output channels, that should be OK, but it seems that your issue is that you want a different zoom on different channels. This is a feature we have also been discussing internally, and it should be easy to add a param that behaves like this: either a shared zoom for all MISP channels or an individual zoom per channel (set via the config and controlled via the control pipe). Is that something that would work for you? We could add this pretty quickly.
The latest IMX412 driver ( which is here : https://storage.googleapis.com/modalai_public/temp/imx412_test_bins/20250919/imx412_fpv_eis_20250919_drivers.zip) has been optimized to get the following:
Once we release the new IMX412 drivers that do not affect GPS, we will provide better documentation of the existing drivers and their differences, but for now you should just use the "eis" driver for the IMX412, as I linked above.
Alex
@cguzikowski , thanks for following up.
I don't know why the ISP is outputting the JPEG with that issue - that is outside of our area of expertise. It could be a bug in the ISP or the JPEG encoder.
I will put together the scripts I have been using for testing. I need to add the LSC (lens shading correction), otherwise the colors look wrong and the image gets darker towards the edges. To apply the LSC correction, we need a map (either a look-up table or a polynomial fit) of each channel's response as a function of pixel coordinate (or radius). This needs to be calibrated (not for each camera module, but for each camera type + lens type), so after the calibration, my results should apply to your camera as well.
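To make the idea concrete, here is an illustrative sketch (my own toy code, not the actual calibration) of applying a radial, per-channel gain map of the kind described above; a polynomial in the squared normalized radius is one common way to fit such a falloff map, though a real calibration may use a look-up table instead:

```python
import numpy as np

def apply_lsc(img, coeffs):
    """Apply a radial lens-shading correction (illustrative only).

    img:    HxWxC float image.
    coeffs: per-channel polynomial [c0, c1, c2]; the gain applied at
            normalized radius r is c0 + c1*r**2 + c2*r**4, boosting
            the image edges (where vignetting darkens the frame) and
            correcting the per-channel color shift across the image.
    """
    h, w = img.shape[:2]
    y, x = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    # squared radius, normalized so the corners are at r2 = 1.0
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) / (cx ** 2 + cy ** 2)
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        c0, c1, c2 = coeffs[c]
        out[..., c] = img[..., c] * (c0 + c1 * r2 + c2 * r2 ** 2)
    return out
```

With per-channel coefficients fitted once per camera type + lens type, the same gain map can then be applied to every frame from that camera model.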
I will follow up early next week.
Alex
@Matt69 , I am sorry, we did not mainline it. However, you should be able to use this commit to enable the individual pwm channel limit control like a standard servo setup.
Alex
Hi @bschulzhf ,
The voxl-inspect-cam -a command should generally not be used, as it puts a lot of stress on the system. Just use it to inspect the stream that you need, such as voxl-inspect-cam hires_down_large_color.
Also, it is not clear exactly what changes you made in your camera config, since you did not provide a diff (but I can see that you probably disabled all the streams except for the preview stream). In any case, you should revert your changes and inspect just the stream you need to estimate the latency.
If you want to provide more information about your application, we can suggest appropriate camera server parameter changes. I do see that "ae_desired_msv": 60 (the target average pixel value for auto exposure control) is potentially too dark - you can try increasing it and see if the image improves.
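For context, the MSV here is (to my understanding) the mean pixel value the auto-exposure loop drives the image toward; this toy computation, assuming an 8-bit 0-255 scale, shows why a target of 60 tends to look dark:

```python
import numpy as np

def mean_sample_value(gray):
    """Toy stand-in for the auto-exposure feedback signal.

    gray: 8-bit grayscale frame. An AE loop raises exposure/gain when
    this mean is below the target (e.g. ae_desired_msv = 60) and
    lowers it when above; 60 out of 255 is well below mid-gray, so
    the converged image looks dim.
    """
    return float(np.mean(gray))

frame = np.full((480, 640), 45, dtype=np.uint8)  # a dim frame
print(mean_sample_value(frame))  # 45.0 -> below a target of 60, AE would brighten
```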
Alex
@Gerhold-Ten-Voorde , our lens holders are custom and we do not have any information we could share about those, but you may be able to find similar lens holders from other sources.
IMX412 : https://www.m12lenses.com/M12-Lens-Holders-s/61.htm
For the IMX664, you may want to check Runcam FPV cameras, which appear to have a similar lens holder / housing; however, I don't know if those would have the extra threads you need.
Also, please note that if you are going to try the spacer approach, make sure that there is absolutely no gap between the spacer and the lens holder, otherwise light can enter the camera from the side and affect the image quality. Also make sure that, even when using the default lens holder with the IMX664, the mounting holes in the aluminum housing are covered, so that light cannot get inside.
Alex
@cguzikowski , got it, thanks for the clarification. I will add a camera server config param that forces the auto exposure to run all the time, regardless of whether the streams are used or not. This should be simple.
Also, I was wondering whether you decided that the ISP snapshot is good enough for you, or whether you want to explore saving the RAW bayer data and processing it offline? We have been experimenting with some approaches for offline processing, and I can share some scripts, which have some flexibility in how much to de-noise, sharpen, etc. I am also going to add LSC (lens shading correction) to the offline processing (and later into the real-time MISP pipeline) to correct for those artifacts you saw where the colors change across the image.
Alex
We can add the JPEG metadata to the JPEG file saved from voxl-record-raw-image. Which fields specifically would be good to have?
Also, regarding the exposure settling: can you please provide the exact configuration you are running? Specifically, which streams are enabled (misp, small_video, snapshot) and whether auto exposure is set to "isp" or not - maybe provide your camera server config? When using MISP, I believe there is a case where, if none of the streams are being used, the AE won't run, but we can fix this for the case of an enabled snapshot.
Alex
Can you please send a picture of the damaged ESCs? Which component blew up? Also what do the labels on the ESD bags from those ESCs say?
The 6s version of M0129 (mini ESC) should have a small label on the white 4-pin connector that says either -63 or -65 (meaning 6S and either 3.8V or 5.0V output for VOXL2 mini or VOXL2). We can also check using the serial number (label should be present on the ESC)
https://docs.modalai.com/voxl-mini-esc-datasheet/#specifications

Alex
@ralinaresg , here is what we have available :
https://storage.googleapis.com/modalai_public/modal_drawings/MRB/D0013-V2.STEP
Hi @the_engineer ,
Please see the responses to your questions below:
Is the drone suitable for such an application, or is there a drone better suited for this kind of application?
Is such indoor autonomous operation feasible with the Starling 2 Max?
To what extent can this drone be programmed and extended for research or development purposes?
Is the drone programmable with Python or a similar programming language?
What charge time can be expected?
Is a docking station / induction charging retrofit possible?
Is the drone capable of navigating in subpar lighting conditions?
What is the expected maximum flight time at maximum takeoff weight?
What are the operational temperature limits of the drone?
Does visual obstacle avoidance also work during remote controlled manual flight?
What is the expected delivery time for the system to Germany?