ModalAI Forum

    Alex Kushleyev (ModalAI Team)
    Best posts made by Alex Kushleyev

    • RE: ToF v2 keeps crashing because of high temperature

      @dlee ,

      Yes, the new ToF sensor (IRS2975C) is more powerful than the previous generation. What I mean by that is that it can emit more IR power, but it also heats up more. Emitting more power allows the sensor to detect objects at larger distances, or objects that are not as reflective.

      In the current operating mode, auto exposure control is enabled inside the sensor itself, which modulates the emitted IR power based on the returns that the sensor is getting. That is to say, the power draw will vary depending on what is in the sensor's view. If there are obstacles nearby, the output power should be low; otherwise it can be high. At full power, the module can consume close to 0.8-0.9W.

      So the first solution, if your design allows, is to add a heat spreader to dissipate the heat, which you have already started experimenting with. The sensor has a large exposed copper pad on the back for heat sinking purposes for this exact reason. Just be careful not to short this pad to anything; use a non-conducting (but heat-transferring) adhesive pad between the sensor and the heat spreader.

      In terms of a software solution to the issue, we can query the temperature of the emitter. We can also control the maximum emitted power used by the auto exposure algorithm. That is to say, still leave the auto exposure running in the sensor, but limit the maximum power that it is allowed to use.

      We are planning to add some software protection that limits the maximum output power as a function of the emitter temperature. This will require some implementation and testing.
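A rough sketch of what such a temperature-based limit could look like is below. Everything here is illustrative: the function name, the thresholds, and the linear ramp are made up for this example and are not ModalAI's implementation.

```python
# Hypothetical sketch of emitter thermal throttling: cap the maximum power the
# auto exposure algorithm may use, as a function of emitter temperature.
# All thresholds are illustrative, not values from the actual ToF driver.

def max_power_limit(temp_c: float,
                    full_power_temp: float = 60.0,
                    cutoff_temp: float = 80.0) -> float:
    """Return the allowed maximum emitter power as a fraction in [0.1, 1.0].

    Below full_power_temp, full power is allowed; above cutoff_temp, power is
    held at a small floor so the sensor still returns data; in between, the
    limit ramps down linearly.
    """
    if temp_c <= full_power_temp:
        return 1.0
    if temp_c >= cutoff_temp:
        return 0.1  # small floor instead of shutting the emitter off entirely
    span = cutoff_temp - full_power_temp
    return 1.0 - 0.9 * (temp_c - full_power_temp) / span
```

The auto exposure loop would keep running inside the sensor; this limit only caps the ceiling it is allowed to reach.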

      Meanwhile, please consider using a heat spreader, which will be the best solution if you want to make use of the sensor's full operating range without having our software limit the output power to prevent overheating.

      posted in Image Sensors
    • RE: Propeller Coefficients for Starling V2

      Hello @Kashish-Garg-0

      We have a curve of "motor voltage vs RPM": for a desired RPM, it tells the ESC what average motor voltage should be applied. The average motor voltage is defined as battery_voltage * motor_pwm_duty_cycle. The battery voltage in this curve is in millivolts. Since you are typically controlling the desired RPM, as a user you do not need to worry about what "throttle" or voltage to apply - the ESC does this automatically in order to achieve the desired RPM. This calibration curve is used as a feed-forward term in the RPM controller. The ESC does support an "open loop" type of control where you specify the power from 0 to 100%, similar to a standard ESC, but PX4 does not use that ESC control mode.
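The feed-forward step can be sketched as follows, assuming the quadratic form a0 + a1*rpm + a2*rpm^2 described in the ESC calibration post linked below. The coefficient values here are placeholders for illustration, not the Starling V2 calibration values (those live in the voxl-esc-params XML).

```python
# Sketch of the ESC feed-forward term: map desired RPM to an average motor
# voltage via a quadratic calibration curve, then to a PWM duty cycle for the
# current battery voltage. Coefficients a0/a1/a2 are made-up placeholders.

def feedforward_duty(rpm: float, battery_mv: float,
                     a0: float = 0.0, a1: float = 0.5, a2: float = 1e-5) -> float:
    """Return a motor PWM duty cycle clamped to [0, 1] for a desired RPM."""
    voltage_mv = a0 + a1 * rpm + a2 * rpm ** 2   # calibration curve is in mV
    duty = voltage_mv / battery_mv               # avg voltage = vbatt * duty
    return max(0.0, min(1.0, duty))
```

Note that if the battery voltage sags, the same desired RPM yields a larger duty cycle, which is exactly why the curve is parameterized in voltage rather than in raw throttle.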

      By the way, you can test the ESC directly (not using PX4) using our voxl-esc tools (https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-esc/-/tree/master/voxl-esc-tools) which works directly on VOXL2 or a standalone linux PC (or mac). voxl-esc-spin.py has a --power argument where you specify the power from 0 to 100, which translates directly to the average duty cycle applied to the motor.

      Here is the calibration for the Starling V2 motor / propeller that we use:
      https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-esc/-/blob/master/voxl-esc-params/mavic_mini_2/mavic_mini_2.xml?ref_type=heads#L63

      Also, you can take a look at this post to see how to interpret those parameters a0, a1, a2 : https://forum.modalai.com/topic/2522/esc-calibration/2

      We also have some dyno tests for this motor / propeller : https://gitlab.com/voxl-public/flight-core-px4/dyno_data/-/blob/master/data/mavic_mini2_timing_test/mavic_mini2_modal_esc_pusher_7.4V_timing0.csv . We are not sure how accurate that is, but it can be used as a starting point. @James-Strawson can you please confirm that is the correct dyno data for the Starling V2 motors?

      Alex

      posted in Ask your questions right here!
    • RE: Sending Recorded Video Though Camera Server on VOXL2

      @reber34 , perhaps this approach can work for you:

      • record a video encoded at a high bit rate (using voxl-camera-server and voxl-record-video). Please note that the output of voxl-record-video will not be in a standard container (such as mp4), but you can fix that with ffmpeg: ffmpeg -r 30 -i voxl-record-video.h264 -codec copy videofile.mp4
      • re-encode the video offline with desired codecs / bit rates / resolutions
      • install gst-rtsp-launch which uses gstreamer to set up an RTSP stream https://github.com/sfalexrog/gst-rtsp-launch/
        • you will first need to figure out what gstreamer pipeline to use on voxl2 that will load your video and parse the h264/h265 frames (can use null sink for testing) and then use that pipeline with gst-rtsp-launch which will take the encoded frames and serve them over rtsp stream.
      • gstreamer may be more flexible for tuning the encoding parameters of h264/h265 (compared to voxl-camera-server) and you can also use it in real time later (using voxl-streamer, which uses gstreamer under the hood)

      Another alternative is to use voxl-record-raw-image to save raw YUVs coming from voxl-camera-server and then use voxl-replay and voxl-streamer - the latter will accept YUVs from the MPA pipe and encode them using the bit rate that you want. Note that depending on the image resolution, YUV images will take a lot more space than encoded video, but maybe that is also OK since VOXL2 has lots of storage.

      Alex

      posted in Ask your questions right here!
    • RE: voxl_mpa_to_ros2 camera_interface timestamp

      @smilon ,

      I believe you are correct! Thank you. We will double check this and fix.

      posted in ROS
    • RE: HiRes camera extrinsics config

      @Gary-Holmgren , you are correct, the high resolution camera is not used for VIO, we typically use it just for video recording / streaming.

      You can certainly add a new transform to the extrinsics file and use it in your application. You should use the same name for the camera in the extrinsics file as you name it in voxl-camera-server.conf just to be consistent.

      posted in Support Request Format for Best Results
    • RE: OV7251 RAW10 format

      Hello @Gicu-Panaghiu,

      I am going to assume you are using VOXL1, since you did not specify.

      We do have RAW8 and RAW10 support for OV7251. The selection of the format has to be done in several places.

      First, you have to select the correct camera driver, specifically..

      ls /usr/lib/libmmcamera_ov7251*.so
      /usr/lib/libmmcamera_ov7251.so
      /usr/lib/libmmcamera_ov7251_8bit.so
      /usr/lib/libmmcamera_ov7251_hflip_8bit.so
      /usr/lib/libmmcamera_ov7251_rot180_8bit.so
      /usr/lib/libmmcamera_ov7251_vflip_8bit.so
      

      There are 5 options, and one of them is _8bit.so, which means it will natively output 8 bit data (all others output 10 bit data).

      The driver name, such as ov7251_8bit, has to be the sensor name <SensorName>ov7251_8bit</SensorName> in /system/etc/camera/camera_config.xml.

      You can check camera_config.xml for what sensor library is used for your OV7251.

      When you run the voxl-configure-cameras script, it will actually copy one of the default camera_config.xml files that are set up for a particular use case, and I believe it will indeed select the 8 bit one - this was done to save CPU cycles needed to convert 10 bit to 8 bit, since the majority of the time only 8 bit pixels are used.

      Now, you mentioned that HAL_PIXEL_FORMAT_RAW10 is passed to the stream config; unfortunately, this does not have any effect on what the driver outputs. If the low-level driver (e.g. libmmcamera_ov7251_8bit.so) is set up to output RAW8, it will output RAW8 whether you request HAL_PIXEL_FORMAT_RAW8 or HAL_PIXEL_FORMAT_RAW10.

      So if you update camera_config.xml to the 10 bit driver and just keep HAL_PIXEL_FORMAT_RAW10 in the stream config (then sync and reboot), you should be getting a 10 bit RAW image from the camera. But since the camera server currently expects an 8 bit image, if you just interpret the image as 8 bit it will appear garbled, so you will need to handle the 10 bit image (decide what you want to do with it) in the camera server.
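One common way to handle 10 bit data is to unpack it to 16-bit pixels before further processing. The sketch below assumes the standard MIPI CSI-2 RAW10 packing (4 pixels in 5 bytes: four high bytes followed by one byte holding the 2 LSBs of each pixel); verify against what your driver actually delivers, since the packing on VOXL1 may differ.

```python
import numpy as np

# Sketch: unpack MIPI-packed RAW10 (4 pixels per 5 bytes) into 16-bit pixels.
# Assumes standard MIPI CSI-2 packing; check your driver's actual output.

def unpack_raw10(packed: np.ndarray) -> np.ndarray:
    """packed: uint8 array whose length is a multiple of 5."""
    b = packed.reshape(-1, 5).astype(np.uint16)
    msb = b[:, :4] << 2                              # 8 MSBs of each pixel
    lsb = (b[:, 4:5] >> np.arange(0, 8, 2)) & 0x3    # 2 LSBs per pixel
    return (msb | lsb).reshape(-1)
```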

      posted in Image Sensors
    • RE: Tracking camera calibration not progressing

      @KnightHawk06 , use voxl-calibrate-camera tracking_down_misp_grey <remaining options>

      posted in VOXL-CAM
    • RE: Cannot change TOF framerate

      The ipk is available here now : http://voxl-packages.modalai.com/stable/voxl-hal3-tof-cam-ros_0.0.5.ipk - you should be able to use the launch file to choose between two modes (5=short range and 9=long range) and fps, which are listed in the launch file.

      posted in Ask your questions right here!
    • RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)

      @Prabhav-Gupta , yes, it seems the OpenCV and ROS YUV_NV12 formats do not match up. I will take a look at it. It seems the ROS YUV is packed (interleaved), while the standard for storing YUV NV12 is two planes : plane 1 : Y (size: width x height), plane 2 : interleaved UV (size: width x height / 2)

      In the meantime.. you can stream a rtsp h264/h265 from VOXL (use decent quality so that image looks good) and use opencv to receive the stream and get decompressed images: https://stackoverflow.com/questions/40875846/capturing-rtsp-camera-using-opencv-python

      Would that work for you? (Unfortunately, with an RTSP stream you will not get the full image metadata, like exposure, gain, timestamp, etc.)

      RTSP streaming can be done using voxl-streamer, which can accept either a YUV (and encode it) or already encoded h264/5 stream from voxl-camera-server.

      Alex

      posted in ROS
    • RE: Image streaming slowdown when using voxl-mpa-to-ros2

      @cfirth , if you are streaming raw RGB images, then 640x480x3 = 921,600 bytes (~921 KB) per frame, which at 30fps is about 27.6 MB/s - definitely more than you want to transfer over wifi for a single stream. You can measure the available bandwidth using iperf as I suggested in a previous post. Consider using a compressed image format in ROS, or use h264/5 encoded streams. You can use voxl-streamer to encode the tracking frames into a streaming video. If you describe your use case in more detail, we can provide additional suggestions.
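The arithmetic above, spelled out:

```python
# Raw RGB frame size and the resulting stream bandwidth at 30 fps.
width, height, bytes_per_pixel, fps = 640, 480, 3, 30

frame_bytes = width * height * bytes_per_pixel  # 921,600 bytes (~921 KB)
stream_rate = frame_bytes * fps                 # 27,648,000 bytes/s (~27.6 MB/s)
```

For comparison, an h264/5 stream of the same frames typically runs at a small fraction of that rate, which is why encoding is the usual answer for wifi links.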

      posted in Support Request Format for Best Results

    Latest posts made by Alex Kushleyev

    • RE: Status of Image Stabilization and Potentially Zoom?

      voxl-portal, as demonstrated in the videos linked above, has been updated in the eis-integration branch (merged latest from dev branch, so it can be built with the latest voxl-cross:4:4 now).

      https://gitlab.com/voxl-public/voxl-sdk/services/voxl-portal/-/tree/eis-integration

      posted in Ask your questions right here!
    • RE: Boson 640 MIPI M0153 16-bit

      @mkriesel ,

      I did a basic test of configuring Boson for external sync using bosonSetExtSyncMode(FLR_BOSON_EXT_SYNC_MODE_E.FLR_BOSON_EXT_SYNC_SLAVE_MODE) in the FLIR python API.

      If the sync signal is absent, Boson just stops sending frames (as expected). Once I installed the missing resistor on the ModalAI Boson adapter to patch the common 30FPS camera sync signal from VOXL2 to Boson, I started receiving frames from Boson at 30FPS. Adjusting the sync signal from 30 to 60FPS resulted in some FPS instability (as well as some minor flickering artifacts) in the Boson frames, because the sync signal coming out of VOXL2 has some jitter. Boson timing at 60FPS is very tight, so a sync signal coming in too early can cut off the previous frame; I believe that is what was happening.

      I also noticed that when the sync signal was at 30FPS, the FFC (flat-field correction) would take twice as long, which is interesting. I wonder if Boson indeed expects a 60FPS sync signal and perhaps something may not work properly otherwise. I will need to check the Boson documentation.

      Also keep in mind that the tracking cameras (AR0144) cannot operate at 60FPS in sync'ed mode (different story). So, if you are using sync'ed AR0144 tracking cameras and would like to use 60FPS sync'ed Boson, this may not be possible with the same sync line.

      I will need to experiment with this some more. Can you please elaborate on the use of Boson sync in your application (and whether you need VOXL2 to generate the sync signal or you have your own sync signal)?

      Alex

      posted in Video and Image Sensors
    • RE: Status of Image Stabilization and Potentially Zoom?

      @jameskuesel ,

      The documentation correctly lists the input resolution as 4040x3040. Even though the camera's maximum resolution is 4056x3040, we can set up the camera driver to request any smaller resolution from the camera. The reason for using width=4040 is that the hardware-optimized debayering functions on the GPU require a specific row alignment (in bytes), and a width of 4040 results in a line stride that is compatible with the GPU functions. If a width of 4056 is selected, the camera does send a valid RAW10 bayer image, but the GPU cannot correctly de-bayer it.

      You can view all the supported raw resolutions by running voxl-camera-server -l and looking at the following output (which I am pasting for the IMX412 driver that you should be using). You will see duplicate resolutions because different FPS values are supported at the same resolution (the FPS values are not printed).

      ANDROID_SCALER_AVAILABLE_RAW_SIZES:
      These are likely supported by the sensor
      4056 x 3040
      4040 x 3040
      4040 x 3040
      3840 x 2160
      3840 x 2160
      3840 x 2160
      1996 x 1520
      1996 x 1520
      1996 x 1520
      1936 x 1080
      1936 x 1080
      1936 x 1080
      1936 x 1080
      1936 x 1080
      1996 x  480
      1996 x  480
      1996 x  240
      1996 x  240
      

      The details about all the camera modes supported by the imx412 EIS driver can be found here : https://docs.modalai.com/camera-video/low-latency-video-streaming/#imx412-operating-modes

      Please follow the instructions step by step and EIS should work.

      EIS zoom and drag is supported. You can test zoom easily by using voxl-portal, click a small check box in bottom left corner to enable advanced panel. You can use mouse wheel or zoom slider to control the zoom.

      See demo videos here :
      ROI features: https://www.youtube.com/watch?v=FXv4855WjNc
      EIS: https://www.youtube.com/watch?v=fi2BO_U5f-c

      (and a screenshot):
      1c60eedc-318d-48c4-a65a-b90d4f62ab18-image.png

      Zoom will work with the default version of voxl-portal, but in order to use the drag feature, you need to use voxl-portal from the eis-integration branch : https://gitlab.com/voxl-public/voxl-sdk/services/voxl-portal/-/tree/eis-integration . Actually, it looks like it has not been updated in a while; let me check if it still builds fine, and I can share a deb.

      Under the hood, the voxl-portal GUI sends commands to voxl-portal back-end running on VOXL2, and that, in turn, sends commands to the camera server to change the zoom and drag. For zoom, indeed a command set_misp_zoom is used, such as:

      voxl-send-command hires_misp_color set_misp_zoom <zoom> <optional: convergence_delay>
      

      zoom should be 1.0 or greater; convergence delay is 0.0-1.0 (0.0: instant, 1.0: infinitely long)

      By the way zoom also works with MISP when EIS is disabled.

      Let me check to make sure the voxl-portal from the EIS branch works and I will also document the zoom / drag command syntax.

      Alex

      posted in Ask your questions right here!
    • RE: VOXL ESC Mini 4-in-1 not detected issue

      @mkriesel , just to confirm, does this test work (run) on other voxl2 boards for you?

      posted in VOXL Flight Deck
    • RE: Boson 640 MIPI M0153 16-bit

      @mkriesel , our M0153 / M0201 adapter does have an option to connect Boson to the common sync line that is used for synchronizing AR0144 tracking cameras (GPIO 109). This line is typically driven at 30Hz, but the frequency is adjustable.

      There is a resistor pad on M0153 / M0201 adapter which is DNI by default, so a 0-ohm resistor would need to be installed to enable this functionality. However, we have not verified that it works.

      It is on our to-do list to verify the Boson sync. Once we confirm that it works, I can provide instructions for installing the resistor if you wanted to enable that functionality. Boson's configuration will also need to be updated to enable the sync input, but it's not a big change.

      Let me get back to you next week..

      Alex

      posted in Video and Image Sensors
    • RE: NTP versus Systemd-timesyncd on the VOXL2

      @Aaron-Porter , perhaps systemd-timesyncd works together with systemd to correctly interpret the clock adjustments, so that run-time of systemd services is properly calculated.

      Alex

      posted in VOXL 2
    • RE: Boson 640 MIPI M0153 16-bit

      Also, a quick update: after some additional testing, we are actually able to communicate with Boson via the CCI connection directly from VOXL2. Hopefully we will be able to enable (partial) Boson configuration from VOXL2 directly, avoiding the use of the USB connection for simple configuration changes.

      Here is a sample script that can be used to test triggering FFC (flat-field correction). The script uses the cci-direct library, which allows communicating with cameras while they are running (in the camera server).

      Before running this script, make sure the camera server is running and is configured to enable the Boson camera (and take note of the camera id (sw id, not slot), which is needed by cci-direct and the script). This is also a bit of a hack, since we are writing part of the payload as the address in an addressed write (as is common in i2c transactions), but Boson's i2c interface does not support addressing.

      #!/bin/bash
      
      if [ "$#" -eq 0 ]; then
      	echo "Error: No arguments provided."
      	echo "Usage: $0 <boson camera id>"
      	exit 1
      fi
      
      cam_id=$1
      
      #command to trigger FFC
      #(extra two 00's at the end to make it a multiple of 3, since we are sending 3 bytes at a time in this test)
      
      data="8E A1 00 0C 00 00 00 FC 00 05 00 07 FF FF FF FF 00 00"
      
      IFS=' ' read -r -a words <<< "$data"
      
      for (( i=0; i<${#words[@]}; i+=3 )); do
      	echo "${words[i]} ${words[i+1]} ${words[i+2]}"
      	voxl-cci-direct -c ${cam_id} -w 0x${words[i]}${words[i+1]} 0x${words[i+2]}
      done
      
      posted in Video and Image Sensors
    • RE: Boson 640 MIPI M0153 16-bit

      Hello @mkriesel ,

      Please see below the instructions for testing the 14 bit Boson data stream on VOXL2:

      • voxl-camera-server : use branch https://gitlab.com/voxl-public/voxl-sdk/services/voxl-camera-server/-/tree/add-boson-14bit-support
      • latest Boson drivers (8 bit unchanged, added 14 bit drivers) : link
      • python scripts to read Boson config, set to 8 bit, set to 14 bit :
        • read config
        • set 8 bit
        • set 14 bit

      Boson 14 bit release notes

      • 14 bit data is published at RAW16 format (unpacked from 14 bit).
      • The boson_bayer image stream can be viewed with voxl-portal, but will appear dark grey because the data is not scaled to full 16 bit range. voxl-portal will just convert 16->8 bit with a bit shift.
      • 8-bit sensormodule has to be used to receive 8 bit data : com.qti.sensormodule.boson_X.bin (replace X with camera slot id)
      • 14-bit sensormodule has to be used to receive 14 bit data : com.qti.sensormodule.boson_14bit_X.bin (replace X with camera slot id)
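To illustrate the note about the dark grey preview: the sketch below assumes voxl-portal's 16->8 bit conversion is a simple right shift by 8 (as described above). With 14 bit data stored unpacked in RAW16, even a full-scale pixel only reaches the low quarter of the 8 bit range.

```python
import numpy as np

# Sketch of a 16->8 bit conversion by bit shift, as described for voxl-portal.
# 14-bit data stored in RAW16 occupies only the low 14 bits, so after >> 8 a
# full-scale 14-bit pixel (0x3FFF) maps to 63 of 255 -- hence the dark image.

def raw16_to_8bit(img: np.ndarray) -> np.ndarray:
    return (img >> 8).astype(np.uint8)
```

Scaling by the actual dynamic range (e.g. a 14-bit-aware shift of 6, or min/max normalization) would produce a normal-looking preview, at the cost of knowing the true bit depth.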

      Configuring Boson to 14 bit:

      • use SDK4.0_Boson_Plus_SDK
      • connect Boson to Linux PC via USB
      • run python3 boson_read.py to check the current settings
      • run python3 boson_set_14bit.py to set to 14 bit output (check FPS setting in the script)
      • run python3 boson_set_8bit.py to set to 8 bit output (check the FPS setting in the script)
      • using an incorrect sensormodule (one that does not match the actual bit resolution of the Boson output)
        will result in the following message in the camera server, and no data will be received from Boson:
      ERROR:   Received "Buffer" error from camera: boson
      
      • 30 or 60 FPS works with both 8 and 14 bit. The fps setting in the camera server config does not change the FPS, but you should make the value in the voxl-camera-server config file consistent with the actual setting (otherwise the video encoder bandwidth may be incorrect, if you use that)
      • using a Boson configured for 14 bit with a version of the camera server that does not support 14 bit will result in a VOXL2 board crash due to insufficient buffer size allocated for the incoming frames (buffer overflow)
      • use the camera server config shared above (with misp enabled for Boson camera)

      voxl-camera-server will detect the frame size and output either:

      camera boson640 detected frame size: 640x512 327680 bytes (819200 alloc).. raw bpp: 8
      

      or

      camera boson640 detected frame size: 640x512 573440 bytes (819200 alloc).. raw bpp: 14
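The two frame sizes in those log lines work out exactly from the 640x512 resolution and the bits per pixel:

```python
# Reproduce the frame sizes reported by voxl-camera-server for Boson 640x512.
width, height = 640, 512

size_8bit = width * height * 8 // 8    # 327,680 bytes (8 bpp)
size_14bit = width * height * 14 // 8  # 573,440 bytes (14 bpp, packed)
```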
      

      Please give it a try. We will merge this to the dev branch of camera server soon.

      Alex

      posted in Video and Image Sensors
    • RE: NTP versus Systemd-timesyncd on the VOXL2

      @Aaron-Porter , I believe the time sync services are interchangeable, such as systemd-timesyncd, ntpd, chronyd. I don't think we have tried using anything other than the default systemd-timesyncd on voxl2.

      If you try those out, please let us know if you find any issues! Thanks

      Alex

      posted in VOXL 2
    • RE: Batteries keep overheating and ultimately break beyond 45 minutes of being plugged in (non flight)

      @Allister-Lim ,

      If you are not monitoring the battery voltage while working with your battery-powered VOXL, it is possible that you are over-discharging the batteries to the point of failure. Especially with LiPo batteries, over-discharging is unsafe, so please be careful. (It seems that you are also using LiPo batteries, in addition to the ModalAI Li-Ion batteries also shown (green).) There is nothing wrong with using LiPo batteries, but you have to be more careful with them: over-discharged batteries lose capacity, and LiPo batteries may become dangerous.

      It is a good idea to use an appropriate DC power supply when working with a VOXL2 that is powered on continuously. A 12V, 3-5A power supply connected to the APM (which will provide 5V for VOXL) should be sufficient. You should have no problem finding one, but in case you need it, here is a link to the AC adapter that we sell : https://www.modalai.com/collections/accessories/products/ps-xt60

      Alex

      posted in Power Modules