Minimizing voxl-camera-server CPU usage in SDK 1.6
-
Hi Modal,
As we at Cleo update voxl-camera-server to SDK 1.6 (from SDK 1.0 with lots of backported changes), I've done some preliminary CPU profiling and have some questions on how to keep voxl-camera-server CPU usage down in SDK 1.6. Two things stand out to me:
1. Using the new `tracking_misp_norm` pipes for our tracking cameras uses ~25% more of a CPU core than using the `tracking` pipes. That ~25% is the total change across our 3 tracking cameras, so usage goes from ~85% of a core to ~110% of a core (exact numbers vary with core allocation). To reproduce this, run `voxl-inspect-cam top bottom back hires_small_color tof_depth` and record the CPU usage of voxl-camera-server, then run `voxl-inspect-cam top_misp_norm bottom_misp_norm back_misp_norm hires_small_color tof_depth`, record the CPU usage again, and compare. Is there a way to prevent this, or can you explain why the `tracking_misp_norm` pipes use more CPU?
2. Adding additional clients to the camera pipe topics causes voxl-camera-server to use more CPU. To reproduce this, I run only voxl-camera-server (no other services), inspect our baseline pipes (e.g. `voxl-inspect-cam top_misp_norm bottom_misp_norm back_misp_norm hires_small_color tof_depth`), and then open additional clients in new terminals, again using `voxl-inspect-cam`. For example, when I open 4 new terminals (on top of the baseline terminal) and run `voxl-inspect-cam top_misp_norm bottom_misp_norm back_misp_norm` in each, voxl-camera-server CPU usage increases by ~70% of a core (from ~110% to ~180%). This behavior on its own doesn't quite make sense to me: why would additional clients change the CPU load of the server, and by that much? The images should only be computed once, not once per client. Furthermore, what's extra strange is that which tracking pipe is used also matters. If the baseline terminal is instead running `voxl-inspect-cam top bottom back hires_small_color tof_depth` and the 4 new client terminals are running `voxl-inspect-cam top bottom back`, then the CPU usage increases by only ~7% of a core (from ~85% to ~92%). The different behavior between the two types of tracking pipes doesn't make sense to me; could you explain it?
Overall, at Cleo we are looking to use the `tracking_misp_norm` pipes going forward, and to be able to have multiple clients consuming those pipes without having to worry about increasing the CPU usage of voxl-camera-server. If you could comment on whether these asks are possible, or explain why not (either at this time, or ever), that would help us a great deal in our process of taking advantage of Modal's great work in robotic perception!
-
Hi @Rowan-Dempster,
We have been looking at some optimizations to help reduce the overall CPU usage of the camera server (not yet in the SDK). Let me test your exact use case and see what can be done.
Just FYI, we recently added support for sharing ION buffers directly from the camera server, which means camera clients can get the images using a zero-copy approach. This saves the CPU cycles otherwise spent sending the image bytes over the pipe, especially when there are multiple clients.
If you would like to learn more about how to use the ION buffers, I can post some examples soon. On the client side, the API for receiving an ION buffer vs a regular buffer is almost the same. One thing that will be different is that a shared ION buffer has to be released by all clients before it can be re-used by the camera server (which makes sense).
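As a rough sketch of what the client side looks like today, here is a minimal camera pipe client using the libmodal-pipe camera helper; double-check the exact flag name and `buf_len` argument against the SDK headers, and note that the buffer release call in the callback is a hypothetical placeholder since the zero-copy API is not published yet.

```c
// Minimal camera pipe client sketch (check names against the SDK headers).
#include <stdio.h>
#include <unistd.h>
#include <modal_pipe_client.h>

#define CH 0 // pipe client channel to use

// called once per frame by the camera helper thread
static void camera_cb(int ch, camera_image_metadata_t meta, char* frame, void* context)
{
    // keep this lightweight; hand the frame off to another thread or the GPU
    printf("got frame %d, %dx%d, %d bytes\n",
           meta.frame_id, meta.width, meta.height, meta.size_bytes);

    // HYPOTHETICAL: with shared ION buffers, every client must release the
    // buffer when it is done so the camera server can reuse it, e.g.:
    // pipe_client_release_ion_buffer(ch, &meta);
}

int main(void)
{
    pipe_client_set_camera_helper_cb(CH, camera_cb, NULL);

    // open the pipe with the camera helper enabled; the helper assembles
    // complete frames and invokes the callback above
    int ret = pipe_client_open(CH, "top_misp_norm", "my-cpu-test-client",
                               EN_PIPE_CLIENT_CAMERA_HELPER, 0);
    if (ret) {
        fprintf(stderr, "failed to open pipe\n");
        return -1;
    }

    while (1) sleep(1); // frames arrive on the helper thread
    return 0;
}
```

Apart from that release step, the receiving side should look just like a regular camera client.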
Even without the ION buffer sharing there is room to reduce the CPU usage, so I will follow up after testing this a bit. Regarding your question about whether sending the image to multiple clients should cause significant extra CPU usage -- yes, you are correct, ideally it should not. However, the reason it is happening here is related to how we set up the ION buffer cache policy: currently, when the CPU accesses the buffers for the `misp_norm` images (coming from the GPU), the CPU reads are not cached and the read access is expensive. Reading the same buffer multiple times therefore results in repeated CPU-to-RAM access, for data that would normally already be fully cached after the first send to a client pipe. However, in some other cases (when the buffer is not used by the CPU at all, but is shared as an ION buffer and the client sends it directly to the GPU), the uncached policy actually results in even lower CPU usage. So I think we need to resolve the buffer cache policy based on the use case.
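To illustrate why the cost scales with the number of clients in the uncached case, here is a rough sketch (not the actual camera server code, just an illustration) of what sending one frame to several client pipes amounts to:

```c
// Illustration only: sending one frame to N client pipes reads the full
// frame N times. With a cached source buffer the 2nd..Nth reads mostly hit
// the CPU cache; with an uncached ION mapping every read goes all the way
// to RAM, so CPU cost grows roughly linearly with the number of clients.
#include <stddef.h>
#include <unistd.h>

void send_frame_to_clients(const char* frame, size_t len,
                           const int* client_fds, int n_clients)
{
    for (int i = 0; i < n_clients; i++) {
        // each write() touches all 'len' bytes of 'frame'
        write(client_fds[i], frame, len);
    }
}
```

More details will come soon.

Alex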
-
@Alex-Kushleyev Hi Alex, I appreciate you looking into our use case for us (as always!). Please let me know if I can help by providing more details regarding the structure of our clients that are consuming the camera pipes. If there is a detail Cleo can't share publicly on the forum I can always email you as well or hop on a quick call to elaborate.
Understood about the `misp` pipes; that expensive read access would explain both point #1 and the strange part of point #2 in my original post.

We at Cleo will be monitoring this closely, since CPU usage regressions are pretty much the only gating item for us upgrading our robotic perception stack to SDK 1.6. The `misp_norm` pipes are a great benefit to that perception stack and we'd of course love to take advantage of them as soon as possible. Thus we are definitely open to trying out zero-copy client approaches for keeping CPU usage down, or any other optimizations you could share examples of for us to try out on our use case.