Minimizing voxl-camera-server CPU usage in SDK1.6
-
Hi Modal,
As we at Cleo update voxl-camera-server to SDK 1.6 (from SDK 1.0 with lots of backported changes), I've done some preliminary CPU profiling and have some questions on how to keep voxl-camera-server CPU usage down in SDK 1.6. Two things stand out to me:
-
- Using the new `tracking_misp_norm` pipes for our tracking cameras uses ~25% more of a CPU core than using the `tracking` pipes. That 25% is the total change across our 3 tracking cameras, so usage goes from ~85% of a core to ~110% of a core (exact numbers vary with core allocation). To illustrate this, you can run `voxl-inspect-cam top bottom back hires_small_color tof_depth` and record the CPU usage of voxl-camera-server, then run `voxl-inspect-cam top_misp_norm bottom_misp_norm back_misp_norm hires_small_color tof_depth`, record the CPU usage again, and compare. Is there a way to prevent this, or can you explain why the `tracking_misp_norm` pipes use more CPU?
- Adding additional clients to the camera pipe topics causes voxl-camera-server to use more CPU. To illustrate this, I run only voxl-camera-server (no other services), inspect our baseline pipes (e.g. `voxl-inspect-cam top_misp_norm bottom_misp_norm back_misp_norm hires_small_color tof_depth`), and then open new clients in additional terminals, again using `voxl-inspect-cam`. For example, when I open 4 new terminals (on top of the baseline terminal) and run `voxl-inspect-cam top_misp_norm bottom_misp_norm back_misp_norm` in each, voxl-camera-server CPU usage increases by ~70% (from ~110% of a core to ~180% of a core). This behavior on its own doesn't quite make sense to me: why would additional clients change the CPU load of the server, and by that much? The images should only be computed once, not per client. Furthermore, what's extra strange is that which tracking pipe is used also matters. If the baseline terminal is instead running `voxl-inspect-cam top bottom back hires_small_color tof_depth` and the 4 new client terminals are running `voxl-inspect-cam top bottom back`, then the CPU usage increases by only ~7% (from ~85% to ~92%). The different behavior between the two types of tracking pipe doesn't make sense to me; could you explain it?
Overall, at Cleo we are looking to use the `tracking_misp_norm` pipes going forward, and to have multiple clients consuming those pipes without having to worry about increasing the CPU usage of voxl-camera-server. If you could comment on whether these asks are possible, or explain why not (either at this time, or ever), that would help us a great deal in our process of taking advantage of ModalAI's great work in robotic perception!
-
Hi @Rowan-Dempster,
We have been looking at some optimizations to help reduce the overall CPU usage of the camera server (not yet in the SDK). Let me test your exact use case and see what can be done.
Just FYI, we recently added support for sharing ION buffers directly from the camera server, which means camera clients get the images using a zero-copy approach. This saves the CPU cycles otherwise wasted on sending the image bytes over the pipe, especially when there are multiple clients.
If you would like to learn more about how to use the ION buffers, I can post some examples soon. On the client side, the API for receiving an ION buffer vs a regular buffer is almost the same. One thing that is different is that a shared ION buffer has to be released by all clients before it can be re-used by the camera server (which makes sense).
Even without the ION buffer sharing there is room to reduce the CPU usage, so I will follow up after testing this a bit. Regarding your question whether sending the image to multiple clients should cause significant extra CPU usage -- you are correct, ideally it should not. However, the reason it is happening here is related to how we set up the ION buffer cache policy: currently, when the CPU accesses the buffers for the `misp_norm` images (coming from the GPU), the CPU reads are not cached and the read access is expensive. Reading the same buffer multiple times results in repeated CPU-RAM access (for data that would normally already be fully cached after the first send to a client pipe). However, in some other cases (when the buffer is not used by the CPU, but is shared as an ION buffer and the client sends the buffer directly to the GPU), this approach results in even lower CPU usage. So I think we need to resolve the buffer cache policy based on the use case. More details will come soon.
Alex
-
@Alex-Kushleyev Hi Alex, I appreciate you looking into our use case for us (as always!). Please let me know if I can help by providing more details regarding the structure of our clients that are consuming the camera pipes. If there is a detail Cleo can't share publicly on the forum I can always email you as well or hop on a quick call to elaborate.
Understood about the `misp` pipes; that expensive read access would explain both point #1 and the strange part of point #2 in my original post.
We at Cleo will be monitoring this closely, since the CPU usage regression is pretty much the only gating item for us upgrading our robotic perception stack to SDK 1.6. The `misp_norm` pipes are a great benefit to that perception stack and we'd of course love to take advantage of them as soon as possible. So we are definitely open to trying out zero-copy client approaches for keeping CPU usage down, or any other optimizations you could share examples of for us to try out on our use case.
-
@Alex-Kushleyev Hi Alex, just following up on any update regarding the shared ION buffers or other methods to work around the CPU hit taken by having many clients on `misp`-based image pipes. Happy to try out any methods/suggestions with our specific clients!
Rowan
-
Hi @Rowan-Dempster ,
I started a new branch where I will be working on some performance optimizations in the camera server.
https://gitlab.com/voxl-public/voxl-sdk/services/voxl-camera-server/-/tree/perf-optimizations
In my initial testing, with the CPU set to perf mode and running one or two instances of `voxl-inspect-cam tracking_front_misp_norm tracking_down_misp_norm`, I was seeing:
- 1 instance (2 inspected streams): 42% CPU (of one core)
- 2 instances (4 inspected streams): 58% CPU (of one core)
With the changes I just committed, I am seeing:
- 1 instance: 31% CPU
- 2 instances: 36% CPU
If you would like, you can test the camera server from this branch and see if you can reproduce the results.
notes:
- the internal buffers were switched from uncached to cached, and proper buffer management was added to ensure that data written by the GPU is properly accessed by the CPU
- with these changes, if you use the `_encoded` stream from the tracking camera, it will work, but in dmesg you will see messages related to `qbuf cache ops failed` -- this is still under investigation and will be fixed soon.
Meanwhile, I will work on a simple example that shows the usage of ION buffers; I will try to share it a bit later today.
Alex
-
Hi @Rowan-Dempster ,
Please take a look at this example (you can build and run it too) : https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-tools/-/blob/add-new-image-tools/tools/voxl-image-repub.cpp
This app can accept a regular image (RAW8 or YUV) and either re-publish it unchanged or crop and publish the result. Sometimes this is useful for quickly cropping an image that is fed into a different application that expects a smaller image or different aspect ratio.
The app shows how to subscribe to and handle ION buffer streams.
Usage:
```
voxl2:/$ voxl-image-repub
ERROR: Pipe name not specified
Re-publish cropped camera frames (RAW8 or YUV)
Options are:
-x, --crop-offset-x    crop offset in horizontal dimension
-y, --crop-offset-y    crop offset in vertical dimension
-w, --crop-size-x      crop size in horizontal dimension (width)
-h, --crop-size-y      crop size in vertical dimension (height)
-o, --output-name      output pipe name
-u, --usage            print this help message
The cropped image will be centered if the crop offsets are not provided.
typical usage:
/# voxl-image-repub tracking --crop-size-x 256 --crop-size-y 256
/# voxl-image-repub tracking --crop-size-x 256 --crop-size-y 256 --crop-offset-x 128 --crop-offset-y 128
```
Example re-publishing an ION buffer image as a regular image (which you can view in voxl-portal):
```
voxl-image-repub tracking_front_misp_norm_ion -o test
```
(You can see which ION pipes are available by running `voxl-list-pipes | grep _ion`.)
Please note that without the previous fix I posted above, a client process that receives an uncached ION buffer will incur extra CPU load while accessing this buffer. For example, the same `voxl-image-repub` client uses 1.7% CPU while republishing the normalized image (cached ION buffer), versus 7.3% CPU while republishing an image from an uncached ION buffer (CPU usage % measured on one of the smaller cores).
Please try it and let me know if you have any questions.
I know this cached/uncached buffering may be a bit confusing, but I will document it a bit more to help explain it better.
Alex
-
@Alex-Kushleyev Hi Alex, just acknowledging your messages, thanks for looking into this and providing us with a path forward. I will have time over the next 2-3 days to test the perf-optimizations branch and familiarize myself with the ION-based clients. I will update you then.
-
@Rowan-Dempster , sounds good, let me know how the testing goes.
Alex
-
@Alex-Kushleyev Actually had some time today to do the profiling and explore the ION pipes.
I see the same improvements that you quoted:
On 2.2.17:
- Normal pipes: 45%
- MISP pipes: 66%
- MISP pipes + 4x tracking clients: 160%
On the perf-optimizations branch:
- Normal pipes: 45%
- MISP pipes: 52%
- MISP pipes + 4x tracking clients: 71%
On the perf-optimizations branch + ION pipes:
- MISP pipes: 48%
- MISP pipes + 4x tracking clients: 52%
I started looking into using the ION pipe in the QVIO server code, but saw this in the voxl-mpa-tools build script:
```
qrb5165)
    check_docker "4.3"
    mkdir -p build32
    cd build32
    cmake -DCMAKE_TOOLCHAIN_FILE=${TOOLCHAIN_QRB5165_1_32} \
          -DBUILD_TOOLS=OFF \
          -DEN_ION_BUF=OFF ../
    make -j$(nproc)
    cd ../
    mkdir -p build64
    cd build64
    cmake -DCMAKE_TOOLCHAIN_FILE=${TOOLCHAIN_QRB5165_1_64} \
          -DBUILD_TOOLS=ON \
          -DEN_ION_BUF=ON ../
    make -j$(nproc)
    cd ../
    ;;
qrb5165-2)
    check_docker "4.3"
    mkdir -p build
    cd build
    cmake -DCMAKE_TOOLCHAIN_FILE=${TOOLCHAIN_QRB5165_2_64} \
          -DBUILD_TOOLS=ON \
          -DEN_ION_BUF=OFF ../
    make -j$(nproc)
    cd ../
    ;;
```
Since we're using the qrb5165 (not the qrb5165-2), we have to use the 32-bit toolchain to build QVIO (right?):
```
qrb5165)
    check_docker "4.4"
    mkdir -p build
    cd build
    cmake -DCMAKE_TOOLCHAIN_FILE=${TOOLCHAIN_QRB5165_1_32} ../
    make -j$(nproc)
    cd ../
    ;;
```
Does this mean that on the qrb5165 (using the 32-bit toolchain), the QVIO server cannot be a client to the ION pipes?
Thank you,
Rowan
-
Yes, QVIO only runs as a 32-bit app due to the nature of the library from Qualcomm.
I tried to build the `voxl-image-repub` application for 32-bit and got the following error:
```
/usr/bin/arm-linux-gnueabi-ld: CMakeFiles/voxl-image-repub.dir/voxl-image-repub.cpp.o: in function `main':
voxl-image-repub.cpp:(.text.startup+0x228): undefined reference to `pipe_client_set_ion_buf_helper_cb'
```
So it seems like the 32-bit version of libmodal-pipe does not support sending ION buffers.
I just checked with the team -- even though we have not tested the ION buffer sharing in a 32-bit environment, it should work. You could try to build the `libmodal-pipe` library with ION support enabled: https://gitlab.com/voxl-public/voxl-sdk/core-libs/libmodal-pipe/-/blob/master/build.sh?ref_type=heads#L76 . Then you would need to install that new library into the docker container where you are building your app, as well as deploy it to VOXL2.
BTW, in order to build the tools in voxl-mpa-tools I needed to disable `-Werror` and comment out a few targets like `voxl-convert-image` and `voxl-inspect-cam-ascii` due to the lack of a 32-bit version of OpenCV.
So.. if you really wanted the QVIO app to use the shared ION buffers, you would have to go that route..
Alex
-
@Alex-Kushleyev Hi Alex, I'm running into compilation errors when building libmodal-pipe after turning the `EN_ION_BUF` flag ON (it builds fine with the flag OFF):
```
Found voxl-cross version: 4.4
-- ---------------------------------------------------------
-- Using voxl-cross 32-bit toolchain for QRB5165
-- C Compiler     : /usr/bin/arm-linux-gnueabi-gcc-7
-- C++ Compiler   : /usr/bin/arm-linux-gnueabi-g++-7
-- Sysroot        : /opt/sysroots/qrb5165_1
-- C flags        : -isystem=/usr/arm-linux-gnueabi/include -isystem=/usr/include -idirafter /usr/include -fno-stack-protector -march=armv7-a -mfloat-abi=softfp -mfpu=vfpv4
-- CXX flags      : -isystem=/usr/arm-linux-gnueabi/include -isystem=/usr/include -idirafter /usr/include -fno-stack-protector -march=armv7-a -mfloat-abi=softfp -mfpu=vfpv4
-- EXE Link Flags : -L/opt/sysroots/qrb5165_1/usr/lib32 -L/opt/sysroots/qrb5165_1/lib -L/opt/sysroots/qrb5165_1/usr/lib -L/opt/sysroots/qrb5165_1/usr/arm-linux-gnueabi -L/opt/sysroots/qrb5165_1/usr/lib/gcc-cross/arm-linux-gnueabi/7 -L/usr/lib
-- SO Link Flags  : -L/opt/sysroots/qrb5165_1/usr/lib32 -L/opt/sysroots/qrb5165_1/lib -L/opt/sysroots/qrb5165_1/usr/lib -L/opt/sysroots/qrb5165_1/usr/arm-linux-gnueabi -L/opt/sysroots/qrb5165_1/usr/lib/gcc-cross/arm-linux-gnueabi/7 /opt/sysroots/qrb5165_1/usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc.a /opt/sysroots/qrb5165_1/usr/lib/gcc-cross/arm-linux-gnueabi/7/libgcc_eh.a /opt/sysroots/qrb5165_1/usr/lib/gcc-cross/arm-linux-gnueabi/7/libssp_nonshared.a /opt/sysroots/qrb5165_1/usr/arm-linux-gnueabi/lib/libc_nonshared.a -L/usr/lib
-- The C compiler identification is GNU 7.3.0
-- The CXX compiler identification is GNU 7.3.0
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/arm-linux-gnueabi-gcc-7 - skipped
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/arm-linux-gnueabi-g++-7 - skipped
-- Detecting CXX compile features - done
-- INFO: building with ion buf support
-- INFO: building with mavlink support
INFO: Skipping Building Python Bindings
-- INFO: Skipping examples and tools
-- Configuring done (1.8s)
-- Generating done (0.0s)
-- Build files have been written to: /home/root/build32
[ 10%] Building C object library/CMakeFiles/modal_pipe.dir/src/start_stop.c.o
[ 30%] Building C object library/CMakeFiles/modal_pipe.dir/src/client.c.o
[ 30%] Building CXX object library/CMakeFiles/modal_pipe.dir/src/buffers.cpp.o
[ 50%] Building C object library/CMakeFiles/modal_pipe.dir/src/interfaces.c.o
[ 50%] Building C object library/CMakeFiles/modal_pipe.dir/src/sink.c.o
[ 60%] Building C object library/CMakeFiles/modal_pipe.dir/src/common.c.o
[ 70%] Building C object library/CMakeFiles/modal_pipe.dir/src/misc.c.o
[ 80%] Building C object library/CMakeFiles/modal_pipe.dir/src/server.c.o
[ 90%] Building CXX object library/CMakeFiles/modal_pipe.dir/src/buffers/gbm.cpp.o
/home/root/library/src/server.c: In function 'pipe_server_write_ion_buffer':
/home/root/library/src/server.c:1757:102: error: format '%ld' expects argument of type 'long int', but argument 5 has type 'int64_t {aka long long int}' [-Werror=format=]
     printf("server preparing to send ion buffer id %d (fd %d) to clients, n_clients: %d, time: %ld\n",
            ion_buf->buffer_id, fd, c[ch].n_clients, _time_monotonic_ns());
/home/root/library/src/client.c: In function '_stop_helper_and_remove_pipe':
/home/root/library/src/client.c:1353:56: error: format '%ld' expects argument of type 'long int', but argument 3 has type 'int64_t {aka long long int}' [-Werror=format=]
     if(en_debug) printf("ch: %d, shutdown socket %ld ns\n", ch, _time_monotonic_ns());
/home/root/library/src/client.c: In function 'pipe_client_close':
/home/root/library/src/client.c:1475:43: error: format '%ld' expects argument of type 'long int', but argument 3 has type 'int64_t {aka long long int}' [-Werror=format=]
     printf("ch: %d, shutdown socket %ld ms\n", ch, _time_monotonic_ns());
cc1: all warnings being treated as errors
make[2]: *** [library/CMakeFiles/modal_pipe.dir/build.make:93: library/CMakeFiles/modal_pipe.dir/src/client.c.o] Error 1
make[2]: *** Waiting for unfinished jobs....
cc1: all warnings being treated as errors
make[2]: *** [library/CMakeFiles/modal_pipe.dir/build.make:149: library/CMakeFiles/modal_pipe.dir/src/server.c.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:106: library/CMakeFiles/modal_pipe.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
```
Seems like I'm going down an uncharted path here. I'm okay with spending my time charting it out, since getting QVIO to work with ION pipes will save us critical CPU utilization. Does ModalAI have plans to switch the official SDK QVIO server release (for qrb5165) to using ION pipes? (I saw ModalAI switched to using the MISP pipes for QVIO a few months ago, taking a CPU hit.)
-
@Rowan-Dempster, the errors above seem like printf formatting issues, but I understand that there could be others. If you want to try to resolve those and see if there are any more significant errors, I can help.
We do not plan to update QVIO to use ION pipes, mainly because QVIO only supports a single camera input, so the savings from going to ION buffers for a single camera would be very small. The majority of the savings came from the buffer caching policy change, which I pointed out in a previous post. Another reason is that we are now focusing on OpenVINS, which does support multiple cameras.
We do plan to fix the buffer caching policy to remove this extra CPU usage for the appropriate MISP outputs. The work is ongoing in the branch I mentioned above.
Are you running multiple instances of QVIO, one for each camera?
Alex
-
@Alex-Kushleyev Sounds good, I proceeded with resolving the basic typing issues related to 32-bit monotonic time. The next issue is a linker error that looks a bit more tricky:
```
-- Build files have been written to: /home/root/build32
[ 10%] Building CXX object library/CMakeFiles/modal_pipe.dir/src/buffers.cpp.o
[ 40%] Building C object library/CMakeFiles/modal_pipe.dir/src/server.c.o
[ 40%] Building C object library/CMakeFiles/modal_pipe.dir/src/common.c.o
[ 40%] Building C object library/CMakeFiles/modal_pipe.dir/src/misc.c.o
[ 50%] Building CXX object library/CMakeFiles/modal_pipe.dir/src/buffers/gbm.cpp.o
[ 60%] Building C object library/CMakeFiles/modal_pipe.dir/src/interfaces.c.o
[ 70%] Building C object library/CMakeFiles/modal_pipe.dir/src/client.c.o
[ 80%] Building C object library/CMakeFiles/modal_pipe.dir/src/sink.c.o
[ 90%] Building C object library/CMakeFiles/modal_pipe.dir/src/start_stop.c.o
[100%] Linking CXX shared library libmodal_pipe.so
/opt/sysroots/qrb5165_1/usr/lib/libgbm.so: file not recognized: file format not recognized
collect2: error: ld returned 1 exit status
make[2]: *** [library/CMakeFiles/modal_pipe.dir/build.make:228: library/libmodal_pipe.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:106: library/CMakeFiles/modal_pipe.dir/all] Error 2
make: *** [Makefile:136: all] Error 2
```
-
@Rowan-Dempster , yeah, that is going to be a problem.. I don't think we have a 32-bit version of `libgbm.so` for VOXL2. This library is used for allocating the ION buffers.
The next thing to try would be to remove the buffer allocation part from the 32-bit build. Actually receiving and reading a buffer does not involve libgbm; the client just gets an FD, which needs to be mmap'ed and used. (This would remove the ability to allocate new ION buffers, which you don't need on the client side anyway.)
I just commented out the following from the library CMakeLists:
```
#list(APPEND LIBS_TO_LINK gbm)
#list(APPEND all_src_files src/buffers/gbm.cpp)
```
and here are the errors:
```
/usr/bin/aarch64-linux-gnu-ld: CMakeFiles/modal_pipe.dir/src/buffers.cpp.o: in function `mpa_ion_buf_pool_alloc_bufs':
buffers.cpp:(.text+0x10c): undefined reference to `allocate_one_buffer(mpa_ion_buf_t*, int, int, unsigned int, unsigned int)'
/usr/bin/aarch64-linux-gnu-ld: buffers.cpp:(.text+0x184): undefined reference to `init_buffer_allocator()'
/usr/bin/aarch64-linux-gnu-ld: CMakeFiles/modal_pipe.dir/src/buffers.cpp.o: in function `mpa_ion_buf_pool_delete_bufs':
buffers.cpp:(.text+0x24c): undefined reference to `delete_one_buffer(mpa_ion_buf_t*)'
/usr/bin/aarch64-linux-gnu-ld: buffers.cpp:(.text+0x2a0): undefined reference to `shutdown_buffer_allocator()'
```
You could try replacing those functions with a fatal "not implemented" print statement. Maybe that would work?
Alex