ModalAI Forum

    Alex Kushleyev

    @Alex Kushleyev

    ModalAI Team



    Best posts made by Alex Kushleyev

    • RE: ToF v2 keeps crashing because of high temperature

      @dlee ,

      Yes, the new ToF sensor (IRS2975C) is more powerful than the previous generation. What I mean by that is that it can emit more IR power, but it also heats up more. Emitting more power allows the sensor to detect objects at larger distances, or objects that are not as reflective.

      In the current operating mode, auto exposure control is enabled inside the sensor itself, which modulates the emitted IR power based on the returns the sensor is getting. That is to say, the power draw will vary depending on what is in the view of the sensor: if there are obstacles nearby, the output power should be low; otherwise it can be high. At full power, the module can consume close to 0.8-0.9W.

      So the first solution, if your design allows, is to add a heat spreader to dissipate the heat, which you have already started experimenting with. The sensor has a large exposed copper pad on the back for exactly this heat-sinking purpose. Just be careful not to short this pad to anything - use a non-conducting (but heat-transferring) adhesive pad between the sensor and the heat spreader.

      In terms of a software solution to the issue, we can query the temperature of the emitter. We can also control the maximum emitted power used by the auto exposure algorithm. That is to say, still leave the auto exposure running in the sensor, but limit the maximum power that it is allowed to use.

      We are planning to add some software protection that limits the maximum output power as a function of the emitter temperature. This will require some implementation and testing.
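      The planned protection could look something like the following sketch. The 70C/90C thresholds and the linear ramp here are hypothetical, purely for illustration - the actual limits would have to come from the emitter's ratings and testing:

```python
def max_emitter_power_pct(temp_c, derate_start_c=70.0, shutoff_c=90.0):
    """Cap for the auto-exposure algorithm's maximum IR output power.

    Below derate_start_c the full 100% is allowed; between the two
    (hypothetical) thresholds the cap ramps down linearly; at or above
    shutoff_c the emitter is not allowed to emit at all.
    """
    if temp_c <= derate_start_c:
        return 100.0
    if temp_c >= shutoff_c:
        return 0.0
    return 100.0 * (shutoff_c - temp_c) / (shutoff_c - derate_start_c)
```

      The auto exposure keeps running inside the sensor; the cap only limits the maximum power it is allowed to request.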

      Meanwhile, please consider using a heat spreader, which will be the best solution if you want to make use of the sensor's full operating range and not have our software limit the output power to prevent overheating.

      posted in Image Sensors
    • RE: Propeller Coefficients for Starling V2

      Hello @Kashish-Garg-0

      We have a curve that is "motor voltage vs RPM", meaning that for a desired RPM, it tells the ESC what average motor voltage should be applied. The average motor voltage is defined as battery_voltage * motor_pwm_duty_cycle. The battery voltage in this curve is in millivolts. Since you are typically controlling the desired RPM, as a user you do not need to worry about what "throttle" or voltage to apply - the ESC does this automatically in order to achieve the desired RPM. This calibration curve is used as a feed-forward term in the RPM controller. The ESC does support an "open loop" type of control where you specify the power from 0 to 100%, which is similar to a standard ESC, but PX4 does not use that ESC control mode.

      By the way, you can test the ESC directly (without PX4) using our voxl-esc tools (https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-esc/-/tree/master/voxl-esc-tools), which work directly on VOXL2 or a standalone Linux PC (or Mac). voxl-esc-spin.py has a --power argument where you specify the power from 0 to 100, which translates directly to the average duty cycle applied to the motor.

      Here is the calibration for the Starling V2 motor / propeller that we use:
      https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-esc/-/blob/master/voxl-esc-params/mavic_mini_2/mavic_mini_2.xml?ref_type=heads#L63

      Also, you can take a look at this post to see how to interpret those parameters a0, a1, a2 : https://forum.modalai.com/topic/2522/esc-calibration/2
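      Assuming the quadratic fit described in the linked post (voltage_mv = a0 + a1*rpm + a2*rpm^2 - please verify the coefficient meaning against that post, this is my reading of it), the feed-forward computation would look roughly like this:

```python
def feedforward_duty_cycle(desired_rpm, battery_mv, a0, a1, a2):
    """Evaluate the 'motor voltage vs rpm' calibration curve (millivolts)
    and convert it to a PWM duty cycle using
    average_motor_voltage = battery_voltage * duty_cycle."""
    voltage_mv = a0 + a1 * desired_rpm + a2 * desired_rpm**2
    # clamp: the ESC cannot apply more than the full battery voltage
    return min(max(voltage_mv / battery_mv, 0.0), 1.0)

# example with made-up coefficients for a 2S pack (7400 mV)
duty = feedforward_duty_cycle(10000, 7400, a0=200.0, a1=0.2, a2=2.0e-5)
```

      The RPM controller then adds a closed-loop correction on top of this feed-forward term.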

      We also have some dyno tests for this motor / propeller : https://gitlab.com/voxl-public/flight-core-px4/dyno_data/-/blob/master/data/mavic_mini2_timing_test/mavic_mini2_modal_esc_pusher_7.4V_timing0.csv . We are not sure how accurate that is, but it can be used as a starting point. @James-Strawson can you please confirm that is the correct dyno data for the Starling V2 motors?

      Alex

      posted in Ask your questions right here!
    • RE: Sending Recorded Video Though Camera Server on VOXL2

      @reber34 , perhaps this approach can work for you:

      • record a video encoded at a high bit rate (using voxl-camera-server and voxl-record-video). Please note that the output of voxl-record-video will not be in a standard container (such as mp4), but you can fix that with ffmpeg : ffmpeg -r 30 -i voxl-record-video.h264 -codec copy videofile.mp4
      • re-encode the video offline with desired codecs / bit rates / resolutions
      • install gst-rtsp-launch which uses gstreamer to set up an RTSP stream https://github.com/sfalexrog/gst-rtsp-launch/
        • you will first need to figure out what gstreamer pipeline to use on VOXL2 to load your video and parse the h264/h265 frames (you can use a null sink for testing), and then use that pipeline with gst-rtsp-launch, which will take the encoded frames and serve them over an RTSP stream.
      • gstreamer may be more flexible for tuning the encoding parameters of h264/h265 (compared to voxl-camera-server) and you can also use it in real time later (using voxl-streamer, which uses gstreamer under the hood)

      Another alternative is to use voxl-record-raw-image to save raw YUVs coming from voxl-camera-server and then use voxl-replay and voxl-streamer - the latter will accept YUVs from the MPA pipe and encode them using the bit rate that you want. Note that depending on the image resolution, YUV images will take a lot more space than encoded video, but maybe that is also OK since VOXL2 has lots of storage.
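      To give a feel for the storage difference mentioned above, here is a quick back-of-the-envelope comparison (the resolution, frame rate and bit rate are arbitrary example numbers):

```python
def nv12_bytes_per_sec(width, height, fps):
    # NV12 / YUV420: 1 byte of luma + 0.5 bytes of chroma per pixel
    return int(width * height * 1.5 * fps)

def encoded_bytes_per_sec(bitrate_mbps):
    # encoded stream size is set by the bit rate, not the resolution
    return int(bitrate_mbps * 1_000_000 / 8)

raw = nv12_bytes_per_sec(1024, 768, 30)   # raw YUV at 1024x768 @ 30fps
enc = encoded_bytes_per_sec(10)           # h264/h265 at 10 Mbit/s
ratio = raw / enc                         # raw is tens of times larger
```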

      Alex

      posted in Ask your questions right here!
    • RE: Recording tracking camera video compressed and coloured

      @vjuliani , the tracking camera (I assume OV7251 or AR0144) is monochrome, so you cannot make the image colored. You can install the latest version of voxl-streamer from dev (nightly builds) http://voxl-packages.modalai.com/dists/qrb5165/dev/binary-arm64/ and it should allow you to encode h265.

      Alex

      posted in Qualcomm Flight RB5 5G Drone
    • RE: GPIO missing in /sys/class/gpio

      @psafi , before calling voxl-gpio read, you need to enable the pin by setting it to input or output. Please check voxl-gpio -h. (hint: use voxl-gpio -m / voxl-gpio mode)

      Alex

      posted in VOXL 2 Mini
    • RE: OV7251 RAW10 format

      Hello @Gicu-Panaghiu,

      I am going to assume you are using VOXL1, since you did not specify..

      We do have RAW8 and RAW10 support for OV7251. The selection of the format has to be done in several places.

      First, you have to select the correct camera driver:

      ls /usr/lib/libmmcamera_ov7251*.so
      /usr/lib/libmmcamera_ov7251.so
      /usr/lib/libmmcamera_ov7251_8bit.so
      /usr/lib/libmmcamera_ov7251_hflip_8bit.so
      /usr/lib/libmmcamera_ov7251_rot180_8bit.so
      /usr/lib/libmmcamera_ov7251_vflip_8bit.so
      

      There are 5 options, and one of them is _8bit.so, which means it will natively output 8-bit data (all the others output 10-bit data).

      The driver name, such as ov7251_8bit, has to be the sensor name <SensorName>ov7251_8bit</SensorName> in /system/etc/camera/camera_config.xml.

      You can check camera_config.xml for what sensor library is used for your OV7251.

      When you run the voxl-configure-cameras script, it will actually copy one of the default camera_config.xml files that are set up for a particular use case, and I believe it will indeed select the 8-bit one - this was done to save CPU cycles needed to convert 10-bit to 8-bit, since the majority of the time only 8-bit pixels are used.

      Now, you mentioned that HAL_PIXEL_FORMAT_RAW10 is passed to the stream config; unfortunately, this does not have any effect on what the driver outputs. If the low-level driver (e.g. libmmcamera_ov7251_8bit.so) is set up to output RAW8, it will output RAW8 whether you request HAL_PIXEL_FORMAT_RAW8 or HAL_PIXEL_FORMAT_RAW10.

      So if you update camera_config.xml to the 10-bit driver and just keep HAL_PIXEL_FORMAT_RAW10 in the stream config (then sync and reboot), you should be getting a 10-bit RAW image from the camera. But since the camera server currently expects an 8-bit image, if you just interpret the image as 8-bit it will appear garbled, so you will need to handle the 10-bit image (decide what you want to do with it) in the camera server.
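      One way to handle the 10-bit data is to unpack it to 16-bit. Assuming the standard MIPI RAW10 packing (4 pixels in 5 bytes, with the 5th byte holding the four 2-bit LSBs - do check what this driver actually outputs, since some camera stacks deliver RAW10 already padded to 16 bits), a numpy sketch would be:

```python
import numpy as np

def unpack_mipi_raw10(packed: np.ndarray) -> np.ndarray:
    """Unpack MIPI RAW10 (5 bytes -> 4 pixels) into uint16 values 0..1023."""
    groups = packed.reshape(-1, 5).astype(np.uint16)
    msb = groups[:, :4]                        # 8 MSBs of pixels 0..3
    lsb = groups[:, 4:5]                       # byte with the packed 2-bit LSBs
    shifts = np.arange(4, dtype=np.uint16) * 2
    return ((msb << 2) | ((lsb >> shifts) & 0x3)).reshape(-1)
```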

      posted in Image Sensors
    • RE: Can't run the voxl-emulator docker image

      It looks like your host machine is missing QEMU or ARM emulation support. Please try the instructions here : https://www.stereolabs.com/docs/docker/building-arm-container-on-x86/ (which also show how to run ARM64 docker images on x86).

      posted in VOXL
    • RE: Cannot change TOF framerate

      The ipk is available here now : http://voxl-packages.modalai.com/stable/voxl-hal3-tof-cam-ros_0.0.5.ipk - you should be able to use the launch file to choose between two modes (5=short range and 9=long range) and fps, which are listed in the launch file.

      posted in Ask your questions right here!
    • RE: Minimal example of using camera_cb

      Hi

      If you are looking for lower-level functionality that is a bit easier to experiment with, you may want to check out https://gitlab.com/voxl-public/utilities/voxl-rtsp/-/tree/dev . This tool is good for testing or running simple scenarios. It accepts some commands (like exposure control) via the command line, so you could add a simple feature to save an image upon a command-line input. Please see the README for how the exposure control input works, and the code for how it's implemented.

      Please make sure that voxl-camera-server is disabled when you run voxl-rtsp.

      If you still would like to go through the voxl-camera-server approach, we may need some more input from respective devs :).

      I hope this helps..

      Alex

      posted in VOXL m500 Reference Drone
    • RE: Rotate imx412 stream

      @TomP , oh yeah, my bad.. you found the right one 🙂

      posted in Video and Image Sensors

    Latest posts made by Alex Kushleyev

    • RE: Minimizing voxl-camera-server CPU usage in SDK1.6

      @Rowan-Dempster ,

      I think you are close.. sure, if you want to share the diff or make a fork of the repos, I will try it out.

      Alex

      posted in Video and Image Sensors
    • RE: Minimizing voxl-camera-server CPU usage in SDK1.6

      @Rowan-Dempster , yeah that is going to be a problem.. I don't think we have a 32 bit version of libgbm.so for VOXL2. This library is used for allocating the ION buffers.

      The next thing to try would be to remove the buffer allocation part from the 32-bit build. Actually receiving and reading the buffer does not involve libgbm: the client just gets an FD, which needs to be mmap'ed and used. (This would remove the ability to allocate new ION buffers, which you don't actually need on the client side.)

      I just commented out the following from the library CMakeLists:

      #list(APPEND LIBS_TO_LINK gbm)
      #list(APPEND all_src_files src/buffers/gbm.cpp)
      

      and here are the errors:

      /usr/bin/aarch64-linux-gnu-ld: CMakeFiles/modal_pipe.dir/src/buffers.cpp.o: in function `mpa_ion_buf_pool_alloc_bufs':
      buffers.cpp:(.text+0x10c): undefined reference to `allocate_one_buffer(mpa_ion_buf_t*, int, int, unsigned int, unsigned int)'
      /usr/bin/aarch64-linux-gnu-ld: buffers.cpp:(.text+0x184): undefined reference to `init_buffer_allocator()'
      /usr/bin/aarch64-linux-gnu-ld: CMakeFiles/modal_pipe.dir/src/buffers.cpp.o: in function `mpa_ion_buf_pool_delete_bufs':
      buffers.cpp:(.text+0x24c): undefined reference to `delete_one_buffer(mpa_ion_buf_t*)'
      /usr/bin/aarch64-linux-gnu-ld: buffers.cpp:(.text+0x2a0): undefined reference to `shutdown_buffer_allocator()'
      

      You could try replacing those functions with a fatal print statement "not implemented". Maybe that would work?

      Alex

      posted in Video and Image Sensors
    • RE: Robust way of setting static IP

      @jonathankampia ,

      Here is something I tried; you can try it as well. You are right, there is a background service that may be taking over. I think I found how to disable it:

      systemctl disable QCMAP_ConnectionManagerd
      systemctl disable qti_pppd
      systemctl disable qtid
      
      rm /lib/systemd/system/multi-user.target.wants/QCMAP_ConnectionManagerd.service
      rm /lib/systemd/system/multi-user.target.wants/qti_pppd.service
      rm /lib/systemd/system/multi-user.target.wants/qtid.service
      
      #edit: instead of disabling the above 3 services and removing the entries from `multi-user.target.wants`, it seems you can do the following:
      systemctl mask QCMAP_ConnectionManagerd
      systemctl mask qti_pppd
      systemctl mask qtid
      
      # you may want to disable dhcpcd as well, but i dont think that is strictly necessary:
      systemctl disable dhcpcd
      

      Now, set up static connection:

      #create a new network interface file
      vi /etc/systemd/network/10-eth0.network
      
      [Match]
      Name=eth0
      
      [Network]
      Address=192.168.xx.xx/24
      Gateway=192.168.xx.1
      DNS=8.8.8.8 1.1.1.1
      

      enable networkd

      systemctl enable systemd-networkd
      

      Then reboot voxl2...

      I think if dhcpcd is enabled, it may first take over the interface, but then networkd takes it back.. For example, here is the log from networkd when dhcpcd is enabled:

      ...
      Dec 10 06:01:00 m0054 systemd-networkd[1126]: dummy0: Gained carrier
      Dec 10 06:01:00 m0054 systemd-networkd[1126]: dummy0: Gained IPv6LL
      Dec 10 06:01:11 m0054 systemd-networkd[1126]: eth0: Gained carrier
      Dec 10 06:02:13 m0054 systemd-networkd[1126]: eth0: Gained IPv6LL
      Dec 10 06:02:13 m0054 systemd-networkd[1126]: eth0: Configured
      Dec 10 06:02:13 m0054 systemd-networkd[1126]: docker0: Link UP
      Dec 10 06:02:21 m0054 systemd-networkd[1126]: eth0: Lost carrier
      Dec 10 06:02:36 m0054 systemd-networkd[1126]: eth0: Gained carrier
      Dec 10 06:02:38 m0054 systemd-networkd[1126]: eth0: Gained IPv6LL
      Dec 10 06:02:38 m0054 systemd-networkd[1126]: eth0: Configured
      

      and the log with dhcpcd disabled:

      ...
      Dec 10 06:02:13 m0054 systemd-networkd[1126]: bond0: Link is not managed by us
      Dec 10 06:02:13 m0054 systemd-networkd[1126]: sit0: Link is not managed by us
      Dec 10 06:02:14 m0054 systemd-networkd[1126]: eth0: Link UP
      Dec 10 06:02:38 m0054 systemd-networkd[1126]: eth0: Gained carrier
      Dec 10 06:09:56 m0054 systemd-networkd[1126]: eth0: Gained IPv6LL
      Dec 10 06:09:56 m0054 systemd-networkd[1126]: eth0: Configured
      Dec 10 06:09:56 m0054 systemd-networkd[1126]: docker0: Link UP
      

      Can you try and see if that solves your issue?

      Alex

      posted in Ask your questions right here!
    • RE: Minimizing voxl-camera-server CPU usage in SDK1.6

      @Rowan-Dempster, the errors above seem like print formatting issues, but I understand that there could be some others. If you want to try to resolve those and see if there are any more significant errors, I can help.

      We do not plan to update QVIO to use ION pipes, mainly because QVIO only supports a single camera input, so the savings from going to ION buffers for a single camera would be very small. The majority of the savings came from the buffer caching policy, which I pointed out in the previous email. Another reason is that we are now focusing on OpenVINS, which does support multiple cameras.

      We do plan to fix the buffer caching policy to remove this extra CPU usage for the appropriate MISP outputs. The work is ongoing in the branch I mentioned above.

      Are you running multiple instances of QVIO, one for each camera?

      Alex

      posted in Video and Image Sensors
    • RE: External pwm ESC questions

      @mkriesel , a few tips..

      • I suspect the FETtec ESC is using a pretty accurate crystal or RC oscillator, so you should not need to calibrate the ESC
      • You can check the current calibration by using QGC to control the individual motors, to see at which command they start up, and make sure that it is very close to the same for all motors (typically around 1000us or a bit more)
      • Additionally, you can spin up each motor with a propeller at a certain % power, let's say 20% or 30%, and use an optical tachometer (RPM meter) to measure the speed of each motor
      • Those are some sanity checks you can do without being able to calibrate and without having any ESC telemetry.
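      As a quick way to evaluate the tachometer readings from those checks, you could flag any motor whose RPM deviates noticeably from the others at the same power setting (the 5% threshold here is an arbitrary example):

```python
def flag_outlier_motors(rpms, tolerance=0.05):
    """Return indices of motors whose measured RPM deviates from the
    median of all motors by more than `tolerance` (as a fraction)."""
    ordered = sorted(rpms)
    n = len(ordered)
    median = (ordered[n // 2] if n % 2 else
              (ordered[n // 2 - 1] + ordered[n // 2]) / 2)
    return [i for i, r in enumerate(rpms)
            if abs(r - median) / median > tolerance]

# four motors spun at the same % power; motor 2 reads ~10% slow
suspects = flag_outlier_motors([8050, 7990, 7200, 8020])
```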

      Regarding the ESC PWM range, the procedure is documented for the VOXL2 IO board : https://docs.modalai.com/voxl2-io-user-guide/#how-to-perform-esc-calibration -- please go over that and make sure you did it correctly.

      Alex

      posted in ESCs
    • RE: Flir Boson+ Application v4.2 install file not available anymore - does anyone have it please?

      @saegsali , yes, you are correct. Unfortunately, the USB port J3 is not populated on M0201 - I believe the decision was made to omit this connector due to some mechanical constraints, but I am not sure.

      A couple of things to consider:

      • if you purchase this connector (BM06B-SRSS-TBT), you can solder it on and plug a USB connection into it. I have not tried it, but I will. I don't see why it would not work; it should be powered from the USB port.
      • if you have any other adapter (from FLIR, etc.) which provides a USB port, the function should be the same, and you could just plug that into the Boson and use it for the sensor configuration.
      • we are developing a tool for communicating with the Boson directly from VOXL2, but it is not yet at the point of performing full configuration. So you should try to use the GUI for the initial setup. Perhaps with the correct connection, GUI version 4.6 will work; otherwise there may be an issue with the GUI itself.

      Alex

      posted in Ask your questions right here!
    • RE: Python Programmatic GStreamer Access for Hardware Encoded Acceleration and Low Latency

      Hi @joseph-vale , I tested the python script from the FLIR help site (Radiometry.py). I just had to modify it to use the correct USB and video devices. The script ran fine, but since my Boson does not support radiometric output, the reported temperature was about 70 degrees colder than it should be (reporting -50C at room temperature). Are you able to get correct temperatures with your device using this script?

      As I mentioned before, there is a way of getting the image data from voxl-camera-server into python. I think it would be interesting to try running the same exact conversion and annotation code from the FLIR example. This would allow you to first check the temperatures using a USB connection, and then check them using the VOXL2 pipeline.

      I am going to set up an example that uses pympa (python wrapper for MPA) to get the 16-bit data from the Boson, plot it, and convert it to temperature using the reference code.

      Alex

      posted in Ask your questions right here!
    • RE: Python Programmatic GStreamer Access for Hardware Encoded Acceleration and Low Latency

      @joseph-vale ,

      to enable voxl-streamer and voxl-camera-server on startup, just use the following commands:

      • systemctl enable voxl-camera-server
      • systemctl enable voxl-streamer

      Regarding your question about thermal radiometric readings, I am not sure - can you please elaborate? The default post-AGC 8-bit mode sends a monochrome processed image. The pixel value is related to the temperature, but the image itself does not provide the mapping from pixel value to temperature. Also, not all Boson units support outputting radiometric data.

      I don't have much experience with this aspect (and I don't think we have any Bosons with radiometric output capability). Looking at some FLIR help, it seems that you have to use the 16-bit output (well, it's actually 14-bit), turn on linear T output, and then the conversion from the RAW pixel value (16-bit) to degrees is simple : https://flir.custhelp.com/app/answers/detail/a_id/3387/~/flir-oem---boson-video-and-image-capture-using-opencv-16-bit-y16

      If this is the case, then here is how this could be tested (high level steps - don't worry if you don't know how to implement them at this point) :

      • set up Boson to correct configuration (output RAW14, linear T, etc) using the FLIR SDK (using USB)
      • configure VOXL2 to use boson driver that accepts 14 bit data (not 8-bit, which is default)
      • voxl-camera-server will publish RAW16 unmodified images to an mpa pipe
      • a client application can receive the RAW16 frame, apply the temperature conversion, and publish an image that reflects a certain temperature -> color mapping. Then this image can be used by voxl-streamer to be encoded with h264 / h265.
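      The conversion in the last step, per the linked FLIR article (if I read it correctly - please double-check against the article), is just a scale and offset from the raw 16-bit value to degrees:

```python
def y16_to_celsius(raw):
    """Convert a TLinear (high-resolution) Y16 pixel value to degrees C.

    Assumes the 0.01 K/count scaling from the FLIR help article; the
    low-resolution TLinear mode would use 0.1 K/count instead.
    """
    return raw * 0.01 - 273.15
```

      For example, a raw value of 29515 corresponds to 295.15 K, i.e. room temperature.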

      I have not actually tried that script (at the bottom of that help article) -- I wonder what would happen if I used it with a Boson that does not support radiometric output. Do you know?

      I can help set this up if I can test it using a non-radiometric Boson. It seems the conversion is straightforward; I could potentially add support for this directly into voxl-camera-server.

      Alex

      posted in Ask your questions right here!
    • RE: Minimizing voxl-camera-server CPU usage in SDK1.6

      @Rowan-Dempster ,

      Yes, QVIO only runs as a 32-bit app due to the nature of the library from Qualcomm.

      I tried to build the voxl-image-repub application for 32 bit and got the following error:

      /usr/bin/arm-linux-gnueabi-ld: CMakeFiles/voxl-image-repub.dir/voxl-image-repub.cpp.o: in function `main':
      voxl-image-repub.cpp:(.text.startup+0x228): undefined reference to `pipe_client_set_ion_buf_helper_cb'
      

      So it seems like the 32-bit version of libmodal-pipe does not support sending ION buffers.

      I just checked with the team - even though we have not tested ION buffer sharing in a 32-bit environment, it should work. You could try to build the libmodal-pipe library with ION support enabled : https://gitlab.com/voxl-public/voxl-sdk/core-libs/libmodal-pipe/-/blob/master/build.sh?ref_type=heads#L76 .

      Then you would need to install that new library into your docker container where you are building your app, as well as deploy to VOXL2.

      BTW, in order to build the tools in voxl-mpa-tools, I needed to disable -Werror and comment out a few targets like voxl-convert-image and voxl-inspect-cam-ascii due to the lack of a 32-bit version of opencv.

      So.. if you really wanted the QVIO app to use the shared ION buffers, you would have to go that route..

      Alex

      posted in Video and Image Sensors
    • RE: M0201 gimbal passthrough pinout

      @Zachary-Lowell-0 and @smiley ,

      I just updated the docs to include more info on this https://docs.modalai.com/M0153/#misc-connector-pinout

      I also verified that I could ping an I2C device connected to M0201 J4, with M0201 itself connected to VOXL2 J7 via M0181 (there are other options for how to connect M0201 to VOXL2):

      IMU -> [JST GH cable] -> (M0201 J4) .... (M0201 J2) -> [MIPI COAX cable] -> (M0181 J1) .... (M0181) -> (VOXL2 J7)

      voxl2:/$ i2cdetect -r -y 4
           0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
      00:          -- -- -- -- -- -- -- -- -- -- -- -- -- 
      10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
      20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
      30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
      40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
      50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- 
      60: -- -- -- -- -- -- -- -- 68 -- -- -- -- -- -- -- 
      70: -- -- -- -- -- -- -- --
      

      (I was able to ping an I2C IMU board connected to /dev/i2c-4; it shows up at address 0x68 in the scan.)

      Please let me know if you have any additional questions.

      Alex

      posted in Ask your questions right here!