@Matt69 , yes you should splice the cable as needed. I will double check if we have plans to release a new cable, it will probably depend on the demand.
Alex
@john_t , sounds good. The DFS processing on VOXL2 happens on the DSP, so it's quite efficient and does not use up much CPU or any GPU.
By the way, you could also simulate lower resolution DFS by binning the 1280x800 into 640x400 and it would be very similar to using the older M0015 stereo modules, so that DFS configuration from M0015 stereo could be used without much change.
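If you want to prototype that offline first, here is a rough numpy sketch of the 2x2 binning (just an illustration, not part of our tooling; it assumes you already have the 1280x800 frames as grayscale arrays, and the function name and test data are made up):

```python
# Rough sketch: simulate 640x400 frames by 2x2-binning a 1280x800 grayscale
# image, averaging each 2x2 block of pixels.
import numpy as np

def bin2x2(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of an (H, W) image -> (H/2, W/2)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)).astype(img.dtype)

# Fake 1280x800 frame (array shape is rows x cols, i.e. (800, 1280))
frame = np.random.randint(0, 256, size=(800, 1280), dtype=np.uint8)
small = bin2x2(frame)
print(small.shape)  # (400, 640), i.e. a 640x400 image
```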
Since the AR0144 stereo is not a configuration we currently use / support (but we have tested it before), we don't have ready-to-go documentation for setting this up.
This would be a kind of experimental test on your end, which we will support. At a minimum, you would need:
If you wanted additional cameras, it may be cheaper to get an M0173 bundle, such as https://www.modalai.com/products/m0173?variant=48528274424112 or similar (please check cable lengths).
Once you have the hardware, we can set up a similar test (based on your baseline and camera orientation) and help you get started.
Alex
A UBEC was used for the servo connections, and they SHOULD NOT be powered from the board. We will update that drawing, thank you for pointing that out.
As this was a first-time build, we did create that custom cable to deliver power to the airspeed sensor. We will make a note of that in the layout.
Appreciate you pointing these things out so we can make those adjustments!
@Jetson-Nano , can you please elaborate on what happened during the attempt to unbrick the voxl2 board?
@Matt69 , good question. Let me ask someone else to help with that.
@SDSU_Drones
Just to close this out, I emailed you a review report. Hope that helps you along your way.
Thanks!
Vinny
Hello @newdroneflyer ,
In order to adjust the focus of the hires camera, you need to do the following:

Here is a picture of the focus tool. It has two sizes: one for the hires camera and one for the tracking camera (if needed). You should have received one with the Starling2 (in an envelope, probably in different packaging).

Alex
Does your YOLOv8 run on CPU or GPU?
There is no documentation on binning, but it is actually very simple: when we talk about 2x2 binning, we typically mean (unless otherwise noted) that the binning happens on the camera itself. Here is a diagram that shows how the pixels are typically binned in the RGGB Bayer pattern: https://www.1stvision.com/cameras/IDS/IDS-manuals/uEye_Manual/hw_binning.html
Because the camera does the binning internally, this particular camera is able to process frames faster, and there is less data to send out, so the readout time is reduced. Please note that not all cameras that output 2x2 binned images actually reduce the readout time - some cameras are limited by the analog ADC part, which still has to sample all pixels and then bin them, so for those cameras you would get a 2x2 binned image with the same readout time. The readout time is mainly driven by how fast the camera can read the pixels (ADC conversion), process them and send them out via MIPI. The MIPI output is usually not the bottleneck; the analog/digital processing is.
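To make the idea from that diagram concrete, here is a small numpy sketch of Bayer-aware 2x2 binning (my own illustration of the principle, not what the sensor hardware actually runs): same-color pixels sit two rows/columns apart in the RGGB mosaic, so they get averaged together, and the output is still an RGGB mosaic at half resolution.

```python
# Illustration only: 2x2 binning of a raw RGGB Bayer frame in numpy.
# Assumes height and width are divisible by 4.
import numpy as np

def bayer_bin2x2(raw: np.ndarray) -> np.ndarray:
    """(H, W) RGGB raw -> (H/2, W/2) RGGB raw, averaging same-color neighbors."""
    h, w = raw.shape
    out = np.empty((h // 2, w // 2), dtype=raw.dtype)
    for dy in (0, 1):      # row offset inside the 2x2 Bayer cell
        for dx in (0, 1):  # column offset inside the 2x2 Bayer cell
            plane = raw[dy::2, dx::2]                              # one color plane
            binned = plane.reshape(h // 4, 2, w // 4, 2).mean(axis=(1, 3))
            out[dy::2, dx::2] = binned.astype(raw.dtype)
    return out

raw = np.random.randint(0, 4096, size=(800, 1280), dtype=np.uint16)  # fake 12-bit raw
print(bayer_bin2x2(raw).shape)  # (400, 640)
```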
There is no overhead for VOXL2 to receive a 2x2 binned image; in fact, the power and CPU usage will be slightly reduced (less data to handle). However, you will lose detail if you use a 2x2 binned image.
For analyzing small AprilTags in the distance: the smaller they are, the lower the rolling shutter skew will be (locally), so motion blur will probably dominate the distortion, and you would need to keep the exposure low. The IMX412 has good low-light sensitivity, which allows you to reduce exposure and still get a good image.
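To put rough numbers on that, here is a back-of-the-envelope sketch, assuming a 12ms readout over 2160 rows (the readout figures come up again in the note below); the tag size and exposure value are made-up, just to show the scale:

```python
# Local rolling shutter skew scales with the number of rows the object spans,
# so a small, distant tag sees only a small local skew and motion blur
# (set by the exposure time) tends to dominate instead.
readout_time_ms = 12.0   # assumed readout time for a 2160-row frame
total_rows = 2160
tag_rows = 60            # hypothetical: a distant tag spanning ~60 rows

local_skew_ms = readout_time_ms * tag_rows / total_rows
print(f"local rolling shutter skew across the tag: {local_skew_ms:.2f} ms")  # ~0.33 ms

exposure_ms = 2.0        # example low exposure to limit motion blur
print(f"exposure window vs local skew: {exposure_ms} ms vs {local_skew_ms:.2f} ms")
```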
One last note: when using EIS, we typically use a full-frame image (4056x3040, not 3840x2160), which is a 4:3 aspect ratio and provides a lot of margin for stabilization of the 3840x2160 output. The readout time for the full-frame image is proportionally larger (16ms instead of 12ms); however, locally the readout skew does not change (a larger image just means a larger total rolling shutter skew). Also, EIS supports arbitrary output dimensions, so you can choose how much stabilization margin you want to have vs the stabilized FOV.
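For a sense of how much margin the 4:3 full frame gives around the 16:9 output, here is the quick arithmetic, assuming a centered crop (the pixel counts are the ones above; in practice EIS shifts the crop around as it stabilizes, so the centered split is just a simplification):

```python
# Crop margin left around a 16:9 stabilized output inside the full 4:3 frame.
full_w, full_h = 4056, 3040   # full-frame readout (4:3)
out_w, out_h = 3840, 2160     # stabilized output (16:9)

margin_x = (full_w - out_w) / 2   # sway room per side horizontally
margin_y = (full_h - out_h) / 2   # sway room per side vertically
print(f"horizontal margin per side: {margin_x:.0f} px ({100 * margin_x / out_w:.1f}% of output width)")
print(f"vertical margin per side:   {margin_y:.0f} px ({100 * margin_y / out_h:.1f}% of output height)")
# -> roughly 108 px (2.8%) horizontally and 440 px (20.4%) vertically
```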
Out of curiosity, are you building a custom drone to support this effort or using one of our drones?
Alex
@Sarika-Sharma , thanks for sending the picture of the TOF module.
In terms of connecting all 4 cameras to VOXL2 Mini, there is a lot of flexibility. The camera configuration (the sensormodules and voxl-camera-server.conf) will need to be set up correctly, but that is not difficult (we have helper tools to help with that).
Please make sure, as you are plugging in all the interposers / flex cables (M0135 and M0084) and cameras, that you line up the pin 1 locations (marked with a dot), so that you can make the connections properly.
Once you connect everything, you can send a picture of the board, flexes and cameras, and I can double check the orientation and provide instructions on how to actually set up the camera server for your config.
Alex