ModalAI Forum

    Alex Kushleyev

    @Alex Kushleyev

    ModalAI Team

    98
    Reputation
    265
    Profile views
    1857
    Posts
    11
    Followers
    0
    Following

    Best posts made by Alex Kushleyev

    • RE: ToF v2 keeps crashing because of high temperature

      @dlee ,

      Yes, the new ToF sensor (IRS2975C) is more powerful than the previous generation. What I mean by that is that it can emit more IR power, but it also heats up more. Emitting more power allows the sensor to detect objects at larger distances, or objects that are not as reflective.

      In the current operating mode, auto exposure control is enabled inside the sensor itself, which modulates the emitted IR power based on the returns that the sensor is getting. That is to say, the power draw will vary depending on what is in the view of the sensor. If there are obstacles nearby, the output power should be low; otherwise it can be high. At full power, the module can consume close to 0.8-0.9W.

      So the first solution, if the design allows, is to add a heat spreader to dissipate the heat, which you have already started experimenting with. The sensor has a large exposed copper pad on the back for heat sinking for this exact reason. Just be careful not to short this pad to anything; use a non-conducting (but heat-transferring) adhesive pad between the sensor and the heat spreader.

      In terms of a software solution to the issue, we can query the temperature of the emitter. We can also control the maximum emitted power used by the auto exposure algorithm. That is to say, still leave the auto exposure running in the sensor, but limit the maximum power that it is allowed to use.

      We are planning to add some software protection that limits the maximum output power as a function of the emitter temperature. This will require some implementation and testing.
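      A limit like this could look something like the following sketch. It is purely illustrative (the function name, thresholds, and linear ramp are my assumptions, not ModalAI's actual implementation): the idea is to leave auto exposure running in the sensor but cap the power it may use, ramping the cap down as the emitter heats up.

```python
# Hypothetical sketch of temperature-based emitter power limiting for a ToF
# sensor. Function name, thresholds, and the linear ramp are illustrative
# assumptions, not the actual ModalAI implementation.

def max_power_for_temp(temp_c, full_power=100, t_start=60.0, t_cutoff=80.0):
    """Maximum emitter power (percent) allowed at a given emitter temperature.

    Full power below t_start, ramped linearly down to zero at t_cutoff.
    The auto exposure algorithm would then be clamped to this ceiling.
    """
    if temp_c <= t_start:
        return full_power
    if temp_c >= t_cutoff:
        return 0
    return full_power * (t_cutoff - temp_c) / (t_cutoff - t_start)
```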

      Meanwhile, please consider using a heat spreader, which will be the best solution if you want to make use of the sensor's full operating range and not have our software limit the output power in order to prevent overheating.

      posted in Image Sensors
    • RE: Propeller Coefficients for Starling V2

      Hello @Kashish-Garg-0

      We have a curve that is "motor voltage vs RPM", meaning that for a desired RPM, it tells the ESC what average motor voltage should be applied. The average motor voltage is defined as battery_voltage * motor_pwm_duty_cycle. The battery voltage in this curve is in millivolts. Since you are typically controlling the desired RPM, as a user you do not need to worry about what "throttle" or voltage to apply; the ESC does this automatically in order to achieve the desired RPM. This calibration curve is used as a feed-forward term in the RPM controller. The ESC does support an "open loop" type of control where you specify the power from 0 to 100%, which is similar to a standard ESC, but PX4 does not use that ESC control mode.

      By the way, you can test the ESC directly (not using PX4) using our voxl-esc tools (https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-esc/-/tree/master/voxl-esc-tools) which works directly on VOXL2 or a standalone linux PC (or mac). voxl-esc-spin.py has a --power argument where you specify the power from 0 to 100, which translates directly to the average duty cycle applied to the motor.

      Here is the calibration for the Starling V2 motor / propeller that we use:
      https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-esc/-/blob/master/voxl-esc-params/mavic_mini_2/mavic_mini_2.xml?ref_type=heads#L63

      Also, you can take a look at this post to see how to interpret those parameters a0, a1, a2 : https://forum.modalai.com/topic/2522/esc-calibration/2
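      As a rough sketch of how those terms are typically applied (a quadratic mapping from desired RPM to average motor voltage in millivolts, used as the feed-forward described above), note the coefficient values in the example below are made up for illustration; the real Starling V2 numbers are in the linked XML file.

```python
# Illustrative use of the a0, a1, a2 calibration terms as a feed-forward:
# desired RPM -> average motor voltage (millivolts), then voltage -> PWM duty
# cycle given the battery voltage. Coefficient values here are made up.

def feed_forward_voltage_mv(rpm, a0, a1, a2):
    """Average motor voltage (mV) to apply for a desired RPM (quadratic fit)."""
    return a0 + a1 * rpm + a2 * rpm * rpm

def duty_cycle(voltage_mv, battery_mv):
    """average_voltage = battery_voltage * duty_cycle, so duty = V / V_batt."""
    return min(1.0, max(0.0, voltage_mv / battery_mv))
```

      For example, with a 7.4 V (7400 mV) battery, a feed-forward voltage of 3700 mV corresponds to a 50% average duty cycle.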

      We also have some dyno tests for this motor / propeller : https://gitlab.com/voxl-public/flight-core-px4/dyno_data/-/blob/master/data/mavic_mini2_timing_test/mavic_mini2_modal_esc_pusher_7.4V_timing0.csv . We are not sure how accurate that is, but it can be used as a starting point. @James-Strawson can you please confirm that is the correct dyno data for the Starling V2 motors?

      Alex

      posted in Ask your questions right here!
    • RE: Sending Recorded Video Though Camera Server on VOXL2

      @reber34 , perhaps this approach can work for you:

      • record a video encoded at a high bit rate (using voxl-camera-server and voxl-record-video). Please note that the output of voxl-record-video will not be in a standard container (such as mp4), but you can fix that with ffmpeg: ffmpeg -r 30 -i voxl-record-video.h264 -codec copy videofile.mp4
      • re-encode the video offline with desired codecs / bit rates / resolutions
      • install gst-rtsp-launch which uses gstreamer to set up an RTSP stream https://github.com/sfalexrog/gst-rtsp-launch/
        • you will first need to figure out what gstreamer pipeline to use on voxl2 that will load your video and parse the h264/h265 frames (can use null sink for testing) and then use that pipeline with gst-rtsp-launch which will take the encoded frames and serve them over rtsp stream.
      • gstreamer may be more flexible for tuning the encoding parameters of h264/h265 (compared to voxl-camera-server) and you can also use it in real time later (using voxl-streamer, which uses gstreamer under the hood)

      Another alternative is to use voxl-record-raw-image to save raw YUVs coming from voxl-camera-server and then use voxl-replay and voxl-streamer - the latter will accept YUVs from the MPA pipe and encode them using the bit rate that you want. Note that depending on the image resolution, YUV images will take a lot more space than encoded video, but maybe that is also OK since VOXL2 has lots of storage.
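      To put rough numbers on that storage difference, here is a back-of-the-envelope sketch assuming the usual NV12 YUV layout of 1.5 bytes per pixel:

```python
# Back-of-the-envelope size of raw NV12 YUV recordings: 1.5 bytes per pixel
# (full-resolution Y plane plus a half-size interleaved UV plane).

def nv12_frame_bytes(width, height):
    """Bytes per NV12 frame: Y plane (w*h) + UV plane (w*h/2)."""
    return width * height * 3 // 2

def nv12_rate_mbps(width, height, fps):
    """Raw NV12 data rate in megabits per second."""
    return nv12_frame_bytes(width, height) * fps * 8 / 1e6
```

      At 1280x720 and 30 fps this is already about 330 Mbit/s, versus a few Mbit/s for a typical h264/h265 stream, which is why the encoded route is usually preferred unless you need the raw pixels.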

      Alex

      posted in Ask your questions right here!
    • RE: voxl_mpa_to_ros2 camera_interface timestamp

      @smilon ,

      I believe you are correct! Thank you. We will double check this and fix.

      posted in ROS
    • RE: SDK 1.3.3 voxl-inspect-cam -a latency bug, inspecting large stream

      @John-Nomikos , I think something is off with your setup, because even the small encoded stream cannot achieve 30fps. I just did a quick test on my voxl2 and was able to maintain 30fps in this stress test.

      [screenshot: voxl-inspect-cam output showing the stress test maintaining 30fps]

      Something is bogging down the system; you should look at the output of top to see if there is some other process that is using up a lot of CPU.

      If you cannot figure out the cause, please provide any other info about your setup that might help, such as anything non-standard that you might be using in voxl-camera-server.

      Alex

      posted in VOXL SDK
    • RE: OV7251 RAW10 format

      Hello @Gicu-Panaghiu,

      I am going to assume you are using VOXL1, since you did not specify.

      We do have RAW8 and RAW10 support for OV7251. The selection of the format has to be done in several places.

      First, you have to select the correct camera driver, specifically..

      ls /usr/lib/libmmcamera_ov7251*.so
      /usr/lib/libmmcamera_ov7251.so
      /usr/lib/libmmcamera_ov7251_8bit.so
      /usr/lib/libmmcamera_ov7251_hflip_8bit.so
      /usr/lib/libmmcamera_ov7251_rot180_8bit.so
      /usr/lib/libmmcamera_ov7251_vflip_8bit.so
      

      There are 5 options, and one of them is _8bit.so, which means it will natively output 8-bit data (all the others output 10-bit data).

      The driver name, such as ov7251_8bit, has to be the sensor name <SensorName>ov7251_8bit</SensorName> in /system/etc/camera/camera_config.xml.

      You can check camera_config.xml for what sensor library is used for your OV7251.

      When you run the voxl-configure-cameras script, it will actually copy one of the default camera_config.xml files that are set up for a particular use case, and I believe it will indeed select the 8-bit one. This was done to save the CPU cycles needed to convert 10-bit to 8-bit, since the majority of the time only 8-bit pixels are used.

      Now, you mentioned that HAL_PIXEL_FORMAT_RAW10 is passed to the stream config; unfortunately, this does not have any effect on what the driver outputs. If the low-level driver (e.g. libmmcamera_ov7251_8bit.so) is set up to output RAW8, it will output RAW8 whether you request HAL_PIXEL_FORMAT_RAW8 or HAL_PIXEL_FORMAT_RAW10.

      So if you update camera_config.xml to the 10-bit driver and just keep HAL_PIXEL_FORMAT_RAW10 in the stream config (then sync and reboot), you should be getting a 10-bit RAW image from the camera. But since the camera server currently expects an 8-bit image, the image will appear garbled if you just interpret it as 8-bit, so you will need to handle the 10-bit image (decide what you want to do with it) in the camera server.
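      If you end up handling the 10-bit data yourself, note that RAW10 is often MIPI-packed (4 pixels in 5 bytes). Here is a minimal unpacking sketch, assuming the common MIPI CSI-2 layout (four high bytes followed by one byte holding the 2 LSBs of each pixel); verify this against the buffer layout your driver actually produces.

```python
# Sketch of unpacking MIPI-packed RAW10 into integer pixel values, assuming
# the common MIPI CSI-2 layout: 4 pixels in 5 bytes, where bytes 0-3 hold the
# 8 MSBs of each pixel and byte 4 holds the 2 LSBs of all four pixels.

def unpack_raw10(packed):
    """packed: bytes-like, length divisible by 5. Returns a list of 10-bit ints."""
    pixels = []
    for i in range(0, len(packed), 5):
        group = packed[i:i + 5]
        lsbs = group[4]  # 2 low bits per pixel, pixel 0 in bits [1:0]
        for j in range(4):
            pixels.append((group[j] << 2) | ((lsbs >> (2 * j)) & 0x3))
    return pixels
```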

      posted in Image Sensors
    • RE: Tracking camera calibration not progressing

      @KnightHawk06 , use voxl-calibrate-camera tracking_down_misp_grey <remaining options>

      posted in VOXL-CAM
    • RE: Cannot change TOF framerate

      The ipk is available here now : http://voxl-packages.modalai.com/stable/voxl-hal3-tof-cam-ros_0.0.5.ipk - you should be able to use the launch file to choose between two modes (5=short range and 9=long range) and fps, which are listed in the launch file.

      posted in Ask your questions right here!
    • RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)

      @Prabhav-Gupta , yes, it seems like the OpenCV and ROS YUV_NV12 formats do not match up. I will take a look at it. It seems the ROS YUV is packed (interleaved), while the standard layout for storing YUV NV12 has two planes: plane 1: Y (size: width * height), plane 2: interleaved UV (size: width * height / 2).
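      For reference, a small sketch of the two-plane NV12 layout described above (splitting a contiguous NV12 buffer into its Y and interleaved UV planes):

```python
# NV12 is two planes in one contiguous buffer:
#   plane 1: Y, width*height bytes (one byte per pixel)
#   plane 2: interleaved UV, width*height/2 bytes (2x2 chroma subsampling)

def nv12_planes(buf, width, height):
    """Split a contiguous NV12 buffer into (y_plane, uv_plane)."""
    y_size = width * height
    return buf[:y_size], buf[y_size:y_size + y_size // 2]
```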

      In the meantime, you can stream RTSP h264/h265 from VOXL (use decent quality so that the image looks good) and use OpenCV to receive the stream and get the decompressed images: https://stackoverflow.com/questions/40875846/capturing-rtsp-camera-using-opencv-python

      Would that work for you? (Unfortunately, with an RTSP stream you will not get the full image metadata, like exposure, gain, timestamp, etc.)

      RTSP streaming can be done using voxl-streamer, which can accept either a YUV (and encode it) or already encoded h264/5 stream from voxl-camera-server.

      Alex

      posted in ROS
    • RE: Minimal example of using camera_cb

      Hi

      If you are looking for more low-level functionality that is a bit easier to experiment with, you may want to check out https://gitlab.com/voxl-public/utilities/voxl-rtsp/-/tree/dev . This tool is good for testing or running simple scenarios. It accepts some commands (like exposure control) via the command line, so you could add a simple feature to save an image upon command line input. Please see the README for how the exposure control input works, and the code for how it is implemented.

      Please make sure that voxl-camera-server is disabled when you run voxl-rtsp.

      If you still would like to go through the voxl-camera-server approach, we may need some more input from respective devs :).

      I hope this helps..

      Alex

      posted in VOXL m500 Reference Drone

    Latest posts made by Alex Kushleyev

    • RE: tracking down pipe switching to images of traccking front camera

      @mark ,

      Thank you for running the tests!

      Regarding installing resistors without knowing what their function is: not recommended 🙂 . I will look into what this resistor is. Are you saying that after installing this resistor, the issue is not reproducible?

      I was going to ask you to do one more test, if you can reproduce the issue. If the issue is indeed at the very low level (the same image is returned for both cameras into voxl-camera-server), then both instances of the Auto Exposure algorithm would react to one camera's image changing. So the test would be:

      • reproduce the issue to get the same image appear in both camera streams, it seems like tracking_front is the one that is being duplicated
      • note the exposure and gain values, reported in the stats below the images
      • quickly cover up the front camera (without affecting the down-facing camera) and see if the exposure / gain of tracking_down also changes at all.
        • please note that if the image is indeed duplicated at a very low level, the Auto Exposure algorithm's output for tracking_down will not actually affect the image (since the stream is from tracking_front), but you should still see some changes while AE is converging after a sudden change. If there is no duplication of the image at the camera server, the AE behavior for the tracking_down camera will not change when you cover up tracking_front. I hope that makes sense.

      Basically, I am trying to figure out whether this is a camera server issue or something downstream.

      Alex

      posted in Video and Image Sensors
    • RE: tracking down pipe switching to images of traccking front camera

      @mark, thank you for the details. It should not matter whether M0054 or M0154 voxl is used.

      Can you please do a test and disable both hires cameras and see if the issue is still reproducible? If the issue is gone, then enable one hires camera and test again.

      Also, does setting the CPU to perf mode help with this? (voxl-set-cpu-mode perf)

      I have never seen this happen with our stable camera configs, such as C28, so this is very odd!

      Alex

      posted in Video and Image Sensors
    • RE: Setting up RTK with VOXL2 using F9P (Base + Rover)

      Hello @Teon,

      This is more of a general question, not specifically related to VOXL2. However, here is some information that you might find helpful.

      Base station

      • Raspberry Pi
      • ZED-F9P receiver with dual band helical antenna, connected to RPI via USB
      • configure the ZED-F9P to be in base station mode (this will require data collection to localize the base station precisely); check the u-blox docs
      • configure ZED-F9P to output RTCM3 corrections over usb to RPI
      • run NTRIP caster software (you will need to find one) which will connect to the USB port and serve the RTCM3 data to the clients via the NTRIP protocol
        -- this would require an NTRIP client on the client side
      • (alternatively) publish the RTCM data via MAVLink directly to the drone (MAVSDK)
      • this would need a client that connects to the u-blox receiver via USB and sends MAVLink messages to your drone
        -- https://mavsdk.mavlink.io/main/en/cpp/api_reference/classmavsdk_1_1_rtk.html
        -- https://github.com/mavlink/MAVSDK/issues/2352

      Rover (VOXL2)

      • wifi connection to the base station server
      • based on the following docs (https://docs.px4.io/main/en/advanced/rtk_gps), the PX4 GPS driver will automatically forward RTCM packets to the attached GPS receiver. The PX4 GPS driver will need to receive the MAVLink GPS_RTCM_DATA packet.
      • if you use the NTRIP caster from the step above, you could run an NTRIP client and have it publish the MAVLink GPS_RTCM_DATA message locally on VOXL2.

      I will ask around to see if we have better documentation for this.

      Alex

      posted in VOXL 2
    • RE: tracking down pipe switching to images of traccking front camera

      @mark , we have seen this issue when using 10- or 12-bit drivers for the AR0144 tracking cameras. These drivers are shipped with the SDK but are not stable (for this exact reason).

      Can you please confirm which ar0144 sensormodules you are using? You can check in /usr/lib/camera.

      Alex

      posted in Video and Image Sensors
    • RE: Minimizing voxl-camera-server CPU usage in SDK1.6

      Hi @Rowan-Dempster,

      We have been looking at some optimizations to help reduce the overall cpu usage of the camera server (not yet in the SDK). Let me test your exact use case and see what can be done.

      Just FYI, we recently added support for sharing ION buffers directly from the camera server, which means the camera clients get the images using a zero-copy approach. This saves the CPU cycles wasted on sending the image bytes over the pipe, especially when there are multiple clients.

      If you would like to learn more about how to use the ION buffers, I can post some examples soon. On the client side, the API for receiving an ION buffer vs a regular buffer is almost the same. One thing that is different is that a shared ION buffer has to be released by all clients before it can be re-used by the camera server (which makes sense).

      Even without ION buffer sharing there is room to reduce CPU usage, so I will follow up after testing this a bit. Regarding your question of whether sending the image to multiple clients should cause significant extra CPU usage: you are correct, ideally it should not. However, the reason it is happening here is related to how we set up the ION buffer cache policy. Currently, when the CPU accesses the buffers for the misp_norm images (coming from the GPU), the CPU reads are not cached, and read access is expensive. Reading the same buffer multiple times therefore results in repeated CPU-RAM access (for data that would normally already be fully cached after the first send to a client pipe). However, in some other cases (when the buffer is not used by the CPU but is shared as an ION buffer and the client sends it directly to the GPU), this approach results in even lower CPU usage. So I think we need to choose the buffer cache policy based on the use case. More details to come soon.

      Alex

      posted in Video and Image Sensors
    • RE: About the usage of CPU core

      @Seungjae-Baek , the resolution of the camera should depend on the use case. However, it is also important to keep in mind what exactly you are doing with the images coming from the hires cameras. For example, if you set the resolution to 4K and you try to view those uncompressed images using voxl-portal, this process will be very taxing on the CPU for the following reason: when you view uncompressed images in voxl-portal, these images are encoded with a software JPG encoder and then sent from voxl2 to your browser. This process is very cpu-heavy. On the contrary, if you use h264 / h265 stream, it should be perfectly fine to encode 4K30 video to disk or even stream, since H264 / H265 encoding is done by a hardware encoder.

      voxl-portal actually does support showing h264 (but not h265) 30FPS streams, so that would be a lot more efficient for the CPU, since there would be no jpg encoding. Otherwise, if you are using raw frames (not _encoded) in voxl-portal, please keep in mind that you will always have a lot of CPU overhead. voxl-portal is designed for debugging / development purposes, so it is not necessarily the most efficient solution for video streaming. For real video streaming use cases, you would use h264 or h265 encoding, save to disk on voxl2, and stream the encoded video for remote viewing. You could encode the same camera source with two different resolutions / codecs / bitrates.

      If you need help setting up a specific use case, please provide some details and I can help you further.

      Alex

      posted in VOXL 2
    • RE: Running M0166 on VOXL 2

      @cbay ,

      We support any camera that is compatible with UVC (plugged into a usb port). Please look into voxl-uvc-server. https://docs.modalai.com/voxl-uvc-server/

      Regarding NIR, have you considered using a regular camera (most cameras are sensitive to NIR) and installing a lens with an NIR pass filter (if you don't want to see visible light)? Then you could use one of our hi-res cameras like the IMX412 or IMX664. I can get you the spectral response plots of those sensors if you need them.

      Alex

      posted in Ask your questions right here!
    • RE: VOXL Mini 4 in 1 ESC query

      @Aaky , 5A hover / 12A max (per motor) should be easy for M0129 in terms of current capacity. Just make sure you have some cooling. Sufficient cooling should be determined by testing and monitoring ESC temps, which are available in the ESC telemetry and PX4.

      Alex

      posted in ESCs
    • RE: Running M0166 on VOXL 2

      @cbay ,

      The minimal components for testing M0166 camera with VOXL2 are:

      • M0155 adapter
      • coax camera cable (should come with the camera)

      To run the solo configuration, you would need to do the following in software:

      • copy correct ar0144 sensormodule driver to /usr/lib/camera/
      • create (or copy/paste) a voxl-camera-server.conf file for that camera to /etc/modalai. The config can be copied from many of our standard configs.
        • as an example, you could run voxl-configure-cameras 26 to use our standard C26 config (see here : https://docs.modalai.com/voxl2-coax-camera-bundles/) and then just disable / remove (in the generated config file) the cameras that are not actually present.

      For testing 1-3 M0166 cameras,

      • option 1: 1-3 M0155 (unsync'ed), plugged into VOXL2 J6, J7, J8
      • option 2: 1 M0173 (synchronized or unsynchronized), M0173 plugs into VOXL2 J6, J7.

      Please take a look at this post, you will find some useful information about customization : https://forum.modalai.com/topic/4491/ar0144-rgb-output-on-voxl2 (even though your question is not related to a color version of M0166).

      If you are using VOXL2 mini, you can still use two M0155. If you wanted 2-3 sync'ed cameras, you need to use either M0188 or M0195 (both of which are for VOXL2 mini only, just like M0173 is for VOXL2 only).

      There are lots of options.. depending on the path you take, we can help you get the software set up.

      Alex

      posted in Ask your questions right here!
    • RE: Camera Calibration issues

      @taiwohazeez , the image is indeed blurry. Did the camera come like this or did something happen to the camera / lens?

      Also, I am assuming that the blur is not coming from the camera moving around (that is to say, the camera was stationary during those image captures).

      Please inspect the lens on the outside to make sure it's not scratched or damaged, as this can also contribute to the issue.

      In any case, adjusting focus on a camera is usually as simple as turning the lens slightly CW or CCW. Since the image is almost in focus, I expect that the adjustment will be within one full turn from the existing lens position in the lens holder. If the camera lens is held by a plastic mount, there should be a screw on the side of the plastic mount which clamps the mount around the lens. The screw needs to be loosened, and then you can use the focus tool to turn the lens slightly while looking at the image view in voxl-portal. For this particular camera with a wide lens, the hyperfocal distance will be around 10cm, meaning that when properly focused, everything beyond 10cm away should appear in focus. With this in mind, you should place a target (a checkerboard is fine) 0.3 - 0.5m away and turn the lens back and forth (with the plastic tool) to try to achieve the best focus. If you cannot achieve good focus (the best focus is still blurry), it probably means that the lens is damaged or dirty (on the front or back side).
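      As a sanity check on the hyperfocal reasoning, the standard formula is H = f^2 / (N * c) + f. The lens parameters in this sketch are illustrative guesses, not the actual specs of this camera module:

```python
# Hyperfocal distance H = f^2 / (N * c) + f, where f = focal length,
# N = f-number, c = circle of confusion. Parameter values used in any
# call below are illustrative guesses, not real lens specs.

def hyperfocal_m(focal_mm, f_number, coc_mm):
    """Hyperfocal distance in meters; beyond ~H, everything appears in focus."""
    return (focal_mm ** 2 / (f_number * coc_mm) + focal_mm) / 1000.0
```

      With a short focal length and a small circle of confusion, H comes out to a few tens of centimeters or less, which is why focusing against a target at 0.3 - 0.5m works well for wide lenses.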

      If you do unscrew the lens all the way out (by turning counter-clockwise until the lens comes out), don't leave the camera exposed like this for a long period of time, so that dust does not get onto the camera sensor.

      Also, what are the exposure and gain reported in the stats below the image in the voxl-portal view when you view the camera stream (without the calibration)? The environment seems pretty dark; I would recommend more lighting, but let's double-check the reported exposure and gain (from auto exposure). Also, please make sure your checkerboard pattern is as flat as possible (this is unrelated to the blurry image, but it will result in better calibration results).

      Please try focusing the camera and let me know if you have any questions.

      Alex

      posted in Support Request Format for Best Results