@m-costa ,
SSC can certainly achieve lower latency in terms of the uncertainty of the time overhead for doing an SPI FIFO read from the IMU sensor. This is true because the SSC is typically doing a lot less than the CPU, but the SSC is still not a real-time processor (compared to a "bare metal" microcontroller implementation). However, I believe the IMU server on the CPU is running at high priority, and we also recently (about 6 months ago) made an update in the Linux kernel to increase the priority of the SPI interrupts. Here is the change that enables RT SPI message queue priority, if that is of interest: https://gitlab.com/voxl-public/system-image-build/meta-voxl2-bsp/-/commit/b5c3189536a80e8ea55846e7705dadbc5a597ff8
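Just to illustrate the general idea on the CPU side (this is only a minimal sketch, not the actual voxl-imu-server code; the thread body and priority value are made up), a read thread can be put on the real-time FIFO scheduler so it preempts normal workloads:

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

// Hypothetical IMU read thread entry point (placeholder, not the real server code)
static void* imu_read_thread(void* arg)
{
    // ... blocking SPI FIFO reads would happen here ...
    return NULL;
}

int main(void)
{
    pthread_t thread;
    pthread_attr_t attr;
    struct sched_param param;

    pthread_attr_init(&attr);
    // use the real-time FIFO scheduler instead of the default CFS scheduler
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    // priority value is illustrative only
    param.sched_priority = 50;
    pthread_attr_setschedparam(&attr, &param);
    // required so the attr's scheduling settings are actually applied
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);

    // needs root or CAP_SYS_NICE to succeed with SCHED_FIFO
    if (pthread_create(&thread, &attr, imu_read_thread, NULL) != 0) {
        perror("pthread_create");
        return -1;
    }
    pthread_join(thread, NULL);
    return 0;
}
```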
There is some logic in voxl-imu-server to estimate the timestamp of each sample using a fudge factor; this has been verified by running VIO, which estimates the time offset between the IMU and the global shutter camera. Here is a link to the icm42688 driver and the note about time sync: https://gitlab.com/voxl-public/voxl-sdk/services/qrb5165-imu-server/-/blob/master/server/src/icm42688.c#L68
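Conceptually it works out to something like the sketch below (my own paraphrase, not the driver code; ODR_HZ, FUDGE_NS and the function names are placeholders). The newest sample in the FIFO was captured roughly a fixed amount of time before the CPU-side read completes, and older samples are spaced backwards by the sample period:

```c
#include <stdint.h>

// Illustrative sketch of per-sample timestamp estimation from one FIFO read.
#define ODR_HZ    4000.0           // configured IMU output data rate
#define FUDGE_NS  (100 * 1000)     // placeholder constant read/transfer latency

// t_read_ns: CPU monotonic time taken right after the SPI FIFO read
// n_samples: number of samples pulled out of the FIFO in this read
// out_ts:    estimated sample timestamps, oldest first
static void estimate_sample_timestamps(int64_t t_read_ns, int n_samples, int64_t* out_ts)
{
    const int64_t dt_ns = (int64_t)(1e9 / ODR_HZ);
    for (int i = 0; i < n_samples; i++) {
        // newest sample happened ~FUDGE_NS before t_read_ns,
        // older samples are one constant sample period apart
        out_ts[i] = t_read_ns - FUDGE_NS - (int64_t)(n_samples - 1 - i) * dt_ns;
    }
}
```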
There is also a timestamp filter which helps remove timestamp jitter coming from the individual FIFO reads. Since the IMU sensor runs at a constant rate (in FIFO mode), there are no short-term fluctuations in the sample times on the sensor itself. The timestamp filter plus the fudge factor should do a good job of quickly latching onto the correct time sync.
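A toy version of such a filter looks roughly like this (again just a sketch, not the actual voxl-imu-server filter; the gain value is made up). Because the sensor clock ticks at a fixed ODR, the filtered timestamp is mostly propagated by the nominal sample period and only nudged slowly toward the jittery CPU-side estimate:

```c
#include <stdint.h>

// Toy constant-rate timestamp filter (illustrative only)
typedef struct {
    int64_t dt_ns;       // nominal sample period (1e9 / ODR)
    int64_t filtered_ns; // last filtered timestamp
    double  gain;        // small correction gain, e.g. 0.01
    int     initialized;
} ts_filter_t;

static int64_t ts_filter_update(ts_filter_t* f, int64_t measured_ns)
{
    if (!f->initialized) {
        f->filtered_ns = measured_ns;
        f->initialized = 1;
        return f->filtered_ns;
    }
    // predict forward one sample period, then apply a small correction
    // toward the measured (jittery) estimate
    int64_t predicted = f->filtered_ns + f->dt_ns;
    int64_t error     = measured_ns - predicted;
    f->filtered_ns    = predicted + (int64_t)(f->gain * (double)error);
    return f->filtered_ns;
}
```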
I have also recently overlaid IMU data at 4 kHz on a camera frame that shows a rolling shutter artifact (strictly speaking, what is plotted is the yaw estimate change since the start of the frame, shown as the vertical line running down the middle). The vertical line deflects when the IMU / camera start rotating. It is not perfect because the real rotation is 3D and only yaw is shown.
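The idea behind the overlay is roughly the following (sketch only; the struct, function names and the row readout time are illustrative, and a real implementation would use the full 3D rotation): for each image row, integrate the gyro yaw rate from the frame start to that row's readout time, then convert the yaw change to a horizontal pixel offset using the focal length.

```c
#include <stdint.h>
#include <math.h>

typedef struct {
    int64_t timestamp_ns;  // estimated sample time
    double  gyro_z_rad_s;  // yaw rate
} imu_sample_t;

// frame_start_ns: start of the frame readout
// row_readout_ns: rolling shutter line time (readout time per image row)
// focal_px:       focal length in pixels, from camera calibration
static double yaw_deflection_px(const imu_sample_t* imu, int n_imu,
                                int64_t frame_start_ns, int row,
                                int64_t row_readout_ns, double focal_px)
{
    int64_t row_time_ns = frame_start_ns + (int64_t)row * row_readout_ns;
    double yaw = 0.0;
    // simple rectangular integration of gyro z between frame start and this row
    for (int i = 1; i < n_imu; i++) {
        if (imu[i].timestamp_ns <= frame_start_ns) continue;
        if (imu[i].timestamp_ns >  row_time_ns)    break;
        double dt = (double)(imu[i].timestamp_ns - imu[i - 1].timestamp_ns) * 1e-9;
        yaw += imu[i].gyro_z_rad_s * dt;
    }
    // project the yaw change into a horizontal shift in pixels
    return focal_px * tan(yaw);
}
```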
Please ignore the light "waves" here; they come from the LED lighting combined with the very low exposure. I reduced the exposure to 1 or 2 ms to avoid motion blur.
I do realize that the IMU sync fudge factor was tuned at 1 kHz (per the note in the link above) and I changed the frequency to 4 kHz, so I will definitely double-check this, but it looks OK based on the image. It is good enough for initial testing, at least.
The plan for now is to use the SPI IMU connected to the regular QUP, accessed from the CPU. Once the rolling shutter compensation pipeline is up and running, the sync can be tested. I am also planning a way to run the pipeline offline using logged RAW images and IMU data, so that the exact real-time processing pipeline can be replicated, as in the sketch below.
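The offline tooling does not exist yet, but the replay would look something like this (sketch only; the types and callbacks are placeholders): logged IMU samples and RAW frames get merged in timestamp order and fed to the same callbacks the live pipeline uses, so the processing path is identical to the real-time case.

```c
#include <stdint.h>
#include <stddef.h>

typedef struct { int64_t timestamp_ns; /* ... gyro/accel ... */ } imu_log_t;
typedef struct { int64_t timestamp_ns; /* ... RAW image buffer ... */ } frame_log_t;

// Replay logged data in timestamp order through the same callbacks
// the real-time pipeline would use.
static void replay(const imu_log_t* imu, size_t n_imu,
                   const frame_log_t* frames, size_t n_frames,
                   void (*on_imu)(const imu_log_t*),
                   void (*on_frame)(const frame_log_t*))
{
    size_t i = 0, f = 0;
    while (i < n_imu || f < n_frames) {
        // pick whichever logged event happened first
        if (f >= n_frames ||
            (i < n_imu && imu[i].timestamp_ns <= frames[f].timestamp_ns)) {
            on_imu(&imu[i++]);
        } else {
            on_frame(&frames[f++]);
        }
    }
}
```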
rolling_shutter_effect_imu_sync.jpg
Alex