ModalAI Forum

    Topics created by paul.foley

    • camera_image_metadata_t framerate is zero from lepton0_color
      (Ask your questions right here! · started by paul.foley · 0 votes, 2 posts, 24 views)

      Alex Kushleyev replied:

      @paul-foley, it looks like our voxl-lepton-server is not setting the FPS field in the metadata: https://gitlab.com/voxl-public/voxl-sdk/services/voxl-lepton-server/-/blob/master/src/publisher.c?ref_type=heads#L338 . I do see above that we set the framerate in the JSON info about the output pipe (cJSON_AddNumberToObject(json, "framerate", 9);), where it is hardcoded to 9, presumably because all Lepton sensors run at 9 fps.

      I just made the change to publish the fps : https://gitlab.com/voxl-public/voxl-sdk/services/voxl-lepton-server/-/commit/110b9cce4c021322566e46f36d6878fca55f7d26

      You are using an older VOXL SDK, so installing the latest nightly package (from http://voxl-packages.modalai.com/dists/qrb5165/dev/binary-arm64/) will probably not work. You will likely need to take the source of the version you are running now (1.3.3: https://gitlab.com/voxl-public/voxl-sdk/services/voxl-lepton-server/-/tree/v1.3.3?ref_type=tags), apply the same change in the source code, and build it.

      Since I just pushed to the dev branch, a new version of the lepton server should appear in our package repo tomorrow, and you can download and try it. It might work, but your SDK is two major versions behind where we are now (1.4.x vs 1.6.x), so you may want to just build the patched version 1.3.3:

      git clone https://gitlab.com/voxl-public/voxl-sdk/services/voxl-lepton-server.git
      cd voxl-lepton-server
      git checkout v1.3.3
      # make the change in publisher.c to include fps
      # ...
      # start voxl-cross docker (you can download it in our downloads section: https://developer.modalai.com/asset)
      docker run -it --rm -v `pwd`:/opt/code -w /opt/code voxl-cross:V4.6 bash
      ./install_build_deps.sh qrb5165 sdk-1.4
      ./build.sh qrb5165
      ./make_package.sh
      # then, outside docker, deploy the new package to voxl2 via adb
      ./deploy_to_voxl.sh

      Please let me know if this works for you!

      Alex

    • voxl-camera-server failed to set pipe size: Cannot allocate memory
      (Ask your questions right here! · started by paul.foley · 0 votes, 3 posts, 217 views)

      Alex Kushleyev replied:

      Hi @Riccardo-Benedetti and @paul-foley,

      This issue is not common. There is a potential condition (which we are going to look into) where, if an MPA client crashes and does not close its pipes properly, the MPA server side keeps the allocated resources open. If the client crashes repeatedly, eventually there is no more memory for the kernel to allocate for new pipes.

      Please check that you have no client processes subscribing to image streams that are crashing and restarting (perhaps restarted automatically by systemd). It is possible that a process other than your test app is misbehaving for some reason and causing this memory leak by continuously crashing and restarting.

      Also, does the same issue happen if you just use voxl-inspect-cam to inspect a single camera stream, with no other camera clients running?

      Alex

    • unable to build voxl-tflite-server in voxl-cross
      (Support Request Format for Best Results · started by paul.foley · 0 votes, 3 posts, 325 views)

      paul.foley replied:

      @tom that worked, thank you!

    • Imagery collected on ModalAI drones
      (Ask your questions right here! · started by paul.foley · 0 votes, 2 posts, 253 views)

      Alex Kushleyev replied:

      @paul-foley , we do not have a repository for images.

      My suggestions would be:

      • Try using training sets from other sources (as you have been trying). I don't have a specific reference, but to better match the 3rd-party datasets to your use case, transform the 3rd-party images to make them look more similar to what you are looking for (crop, zoom, resize, adjust color, etc.). This can be automated.
      • Transform the images collected on Starling to some standard. For example, you can perform fisheye un-distortion to make them look more like images taken with the non-fisheye cameras from your training set; similarly, adjust color if needed (white balance).

      In general, if you can somehow standardize your images and convert the 3rd-party images to the same standard, you can greatly benefit from the large datasets you can find online. The only downside is that when running the model online you will need to transform your images in real time, but that should not be a big deal. If you need it, I can help you with some GPU-based examples to do image transformation very quickly, but you should first make sure this approach works for you (and optimize later 🙂).

      I hope this helps.

      Alex