ModalAI Forum

    Issues with custom Yolov8

    VOXL 2 Mini
    Tags: yolov8, deep learning, voxl 2 mini, custom models
    • jeremyrbrown5

      Hello

      We are training a YOLOv8 model to recognize Raspberry Pis. We are able to create the .tflite, test it using the predict function with 100% detection and high confidence, and push it to the VOXL 2 Mini. However, no matter how we configure voxl-tflite-server, we can't see any detections in VOXL Portal.

      Some notes about our setup:

      For training we are using 75 images: 50 for training, 15 for validation, and 10 for test.

      Our model-train.yaml is set to 30 epochs, batch size 5, and 2 workers.

      Our custom labels.txt for the tflite server is just: 0 RasPi

      We have used both the tracking_front and hires_misp_color cameras, with no luck.
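
      Roughly, the two config files above look like this (the YAML field names approximate the voxl-train-yolov8 template rather than being copied from our repo):

      ```yaml
      # model-train.yaml (training side) -- values from the notes above;
      # field names are assumptions based on the voxl-train-yolov8 template
      epochs: 30
      batch: 5
      workers: 2

      # labels.txt for voxl-tflite-server is a separate plain-text file with
      # one "class-index class-name" pair per line; ours is the single line:
      #   0 RasPi
      ```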

      In the deep-learning documentation there is a section about the initialize_model_settings function in the main file, but we cannot find this function on the VOXL 2 Mini. If the steps under 'Implementing your model in voxl-tflite-server' and 'Writing the model class' are critical to our success, we would really appreciate the function/file locations for the VOXL 2 Mini.

      Thank you

      • Zachary Lowell 0 (ModalAI Team)

        @jeremyrbrown5 I will plan on testing this out tomorrow. How are you validating that it is detecting the RPis? Are you running voxl-inspect-detections tflite_data -a?
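
        For example, on the VOXL you can check the server's pipes and watch detections directly (exact pipe names can vary a bit between SDK versions):

        ```shell
        # Confirm the tflite server's output pipes exist -- MPA pipes live
        # under /run/mpa (e.g. /run/mpa/hires_misp_color for the camera)
        ls /run/mpa | grep tflite

        # Stream every detection the server publishes on the tflite_data pipe
        voxl-inspect-detections tflite_data -a
        ```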

        • jeremyrbrown5 @Zachary Lowell 0

          @Zachary-Lowell-0 we are using the yolo predict function inside the voxl-docker. I didn't know about voxl-inspect-detections, so I'll try that.

          • Zachary Lowell 0 (ModalAI Team)

            @jeremyrbrown5 said in Issues with custom Yolov8:

            @Zachary-Lowell-0 we are using the yolo predict function inside the voxl-docker. I didn't know about voxl-inspect-detections, so I'll try that.


            If you are running your model directly on voxl-tflite-server, then you can leverage the VOXL SDK to detect any outputs from the model; that SDK is what shows the image on VOXL Portal. My guess is that since these images aren't showing on VOXL Portal, you are having an issue during startup.

            Can you run voxl-tflite-server directly on the command line and paste the output here?

            • Zachary Lowell 0 (ModalAI Team)

              @jeremyrbrown5 If you paste your model and upload it here, I can download it and help troubleshoot the issue you are running into.

              • Zachary Lowell 0 (ModalAI Team)

                https://gitlab.com/voxl-public/support/voxl-train-yolov8

                I am assuming you followed this instruction set for training your model?
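
                If so, the float16 file normally comes out of the Ultralytics export step in that flow, something like this (the weights path and image size here are placeholders, not values from your run):

                ```shell
                # Export trained weights to a float16 TFLite file; half=True is
                # what produces the *_float16.tflite naming. Adjust model= and
                # imgsz= to match your own training run.
                yolo export model=runs/detect/train/weights/best.pt format=tflite half=True imgsz=640
                ```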

                • jeremyrbrown5 @Zachary Lowell 0

                  @Zachary-Lowell-0

                   stinger (D0013):/$ voxl-tflite0-server
                   bash: voxl-tflite0-server: command not found
                   stinger (D0013):/$ voxl-tflite-server
                   skip_n_frames: 5
                   model: /usr/bin/dnn/best_float16.tflite
                   input_pipe: /run/mpa/hires_misp_color/
                   delegate: gpu
                   allow_multiple: false
                   output_pipe_prefix: mobilenet
                   existing instance of voxl-tflite-server found, attempting to stop it
                   WARNING: Unknown model type provided! Defaulting post-process to object detection.
                   INFO: Created TensorFlow Lite delegate for GPU.

                  • jeremyrbrown5 @Zachary Lowell 0

                    @Zachary-Lowell-0 Yes, I have been using the guides linked on the ModalAI website.

                    • jeremyrbrown5 @jeremyrbrown5

                      @jeremyrbrown5 Here is a link to our current model's .tflite file

                      • jeremyrbrown5 @jeremyrbrown5

                        @jeremyrbrown5 We figured it out: we just renamed the model to yolov8n_float16.tflite. Detections are pretty spot on now!
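
                        In case it helps anyone else, the fix amounts to a rename, a config update, and a restart. The server appears to pick its post-processing from the model filename, which explains the earlier "Unknown model type provided!" warning. The model directory is the one from the server log; the config-file path is our assumption, so adjust if yours differs:

                        ```shell
                        # Rename so voxl-tflite-server recognizes YOLOv8 post-processing;
                        # /usr/bin/dnn is the model directory shown in the log output above
                        cd /usr/bin/dnn
                        mv best_float16.tflite yolov8n_float16.tflite

                        # Update the model path in the server config (assumed to live at
                        # /etc/modalai/voxl-tflite-server.conf) and restart the service
                        sed -i 's|best_float16|yolov8n_float16|' /etc/modalai/voxl-tflite-server.conf
                        systemctl restart voxl-tflite-server
                        ```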
