ModalAI Forum
    Denver Bennett · Topics: 11 · Posts: 18 · Best: 1 · Controversial: 0 · Following: 0 · Followers: 0 · Groups: 0

    Topics created by Denver Bennett

    • tflite-server with custom model?
      Ask your questions right here! · Denver Bennett
      0 Votes · 3 Posts · 153 Views

      Denver Bennett

      Hello all,

      I was able to successfully train a MobileNet v2 (object detection model), quantize it, and convert it to .tflite format.

      Basically, my custom model had to meet 3 criteria:

      1. The model is not too large (models should probably be below 17 KB).
      2. The model's input shape is [1 300 300 3] and the input data type is uint8 (numpy.uint8).
      3. The model has four outputs, in THIS ORDER, all in float32 (numpy.float32) format:
         output one shape: [1 10 4]
         output two shape: [1 10]
         output three shape: [1 10]
         output four shape: [1]

      From what I can best understand, the model architecture doesn't have to be an exact match, so long as the inputs and outputs are compatible.

      Here is my code:

      import tensorflow as tf
      from tensorflow.keras import layers, models

      # Load the MobileNetV2 feature extractor directly from TensorFlow
      base_model = tf.keras.applications.MobileNetV2(
          input_shape=(300, 300, 3),  # Use 300x300 input shape as required
          include_top=False,
          weights='imagenet')

      # Freeze the base model
      base_model.trainable = False

      # Adjust input shape to 300x300x3 and use uint8 data type
      inputs = tf.keras.Input(shape=(300, 300, 3), dtype='uint8')

      # Use a Lambda layer to cast inputs to float32
      x = layers.Lambda(lambda image: tf.cast(image, tf.float32))(inputs)

      # Pass the cast inputs through the base model
      x = base_model(x)
      x = layers.GlobalAveragePooling2D()(x)
      x = layers.Dense(1280, activation='relu')(x)

      # Bounding box output (10 detections, 4 coordinates each)
      bbox_outputs = layers.Dense(40, activation='sigmoid')(x)
      bbox_outputs = layers.Lambda(lambda t: tf.reshape(t, [1, 10, 4]), name="bbox_outputs")(bbox_outputs)

      # Class ID output (10 detections)
      class_outputs = layers.Dense(10, activation='softmax')(x)
      class_outputs = layers.Lambda(lambda t: tf.reshape(t, [1, 10]), name="class_outputs")(class_outputs)

      # Confidence score output (10 detections)
      confidence_outputs = layers.Dense(10, activation='sigmoid')(x)
      confidence_outputs = layers.Lambda(lambda t: tf.reshape(t, [1, 10]), name="confidence_outputs")(confidence_outputs)

      # Number of detections (single constant value of 10)
      num_detections = layers.Lambda(lambda t: tf.constant([10], dtype=tf.float32))(x)
      num_detections = layers.Lambda(lambda t: tf.reshape(t, [1]), name="num_detections")(num_detections)

      # Define the outputs explicitly in the required order:
      # 1. Bounding boxes  2. Class IDs  3. Confidence scores  4. Number of detections
      model = tf.keras.Model(inputs, [bbox_outputs, class_outputs, confidence_outputs, num_detections])

      # Compile the model
      model.compile(optimizer='adam', loss='mean_squared_error')

      # Define a ConcreteFunction for the model with explicit output signatures
      @tf.function(input_signature=[tf.TensorSpec([1, 300, 300, 3], tf.uint8)])
      def model_signature(input_tensor):
          outputs = model(input_tensor)
          return {
              'bbox_outputs': outputs[0],
              'class_outputs': outputs[1],
              'confidence_outputs': outputs[2],
              'num_detections': outputs[3]
          }

      # Convert the model to TensorFlow Lite using signatures
      converter = tf.lite.TFLiteConverter.from_concrete_functions([model_signature.get_concrete_function()])

      # Apply float16 quantization
      converter.optimizations = [tf.lite.Optimize.DEFAULT]
      converter.target_spec.supported_types = [tf.float16]  # Use float16 quantization

      # Convert the model
      tflite_model = converter.convert()

      # Save the TensorFlow Lite model
      with open('/content/mobilenet_v2_custom_quantized.tflite', 'wb') as f:
          f.write(tflite_model)

      # Load the TFLite model and check the input/output details to confirm correct mapping
      interpreter = tf.lite.Interpreter(model_content=tflite_model)
      interpreter.allocate_tensors()

      # Get the input and output details to verify correct input/output structure
      input_details = interpreter.get_input_details()
      output_details = interpreter.get_output_details()

      print("Input Details:", input_details)
      print("Output Details:", output_details)

    • How do I connect MAVROS to PX4 with UDP ports?
      Ask your questions right here! · Denver Bennett
      0 Votes · 1 Post · 160 Views

      No one has replied

    • MAVLink and MAVROS NOT connecting to PX4
      Ask your questions right here! · Denver Bennett
      1 Vote · 1 Post · 150 Views

      No one has replied

    • Issues with facilitating MAVROS communication between Jetson Orin and VOXL2
      Ask your questions right here! · Denver Bennett
      0 Votes · 1 Post · 158 Views

      No one has replied

    • Voxl2 SDK 1.3.0
      Ask your questions right here! · Denver Bennett
      0 Votes · 3 Posts · 243 Views

      Alex Kushleyev

      @Denver-Bennett, sorry for the delay.

      Have you followed our documentation on this topic? https://docs.modalai.com/voxl2-io-user-guide/

      The firmware needs to be updated if that was not done when you installed SDK 1.3.0. The following docs page explains how to check and install the firmware: https://docs.modalai.com/voxl2-io-firmware/.

      The voxl2-io package should already be installed on voxl2 with SDK 1.3.0, so you should be able to use commands like:

      voxl-2-io scan
      voxl-2-io upgrade_firmware

      (here is the actual script : https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl2-io/-/blob/master/scripts/voxl-2-io?ref_type=heads)

    • MAVROS: offboard mode issues, ./run_mavros_test.sh
      ROS · Denver Bennett
      0 Votes · 2 Posts · 160 Views

      Kashish Garg

      @Denver-Bennett Were you able to find a solution to this? Same issue on my end.

    • MAVROS: `waiting for offboard mode` still loading
      ROS · Denver Bennett
      0 Votes · 2 Posts · 191 Views

      Kashish Garg

      @Denver-Bennett Were you able to find a solution to this? Same issue on our end.

    • Output based on VOXL tflite live footage/bounding boxes
      Ask your questions right here! · Tags: tflite, tflite-server, mobilenet · Denver Bennett
      0 Votes · 2 Posts · 164 Views


      @Denver-Bennett

      Hey Denver, happy to try and help you out with this! The place to start is our documentation for voxl-tflite-server, which is how we typically run ML models on VOXL. What you'll see from those docs is that we have a number of pretrained models already onboard, as well as some support for custom models. voxl-tflite-server then writes AI predictions out to a pipe using libmodal-pipe. However, libmodal-pipe currently only has a C/C++ API, so you'd have to write a small piece of C code to consume data from the pipe and invoke your Python script accordingly.

      I'm more than happy to help with this, but it would be good to do some reading first. I'd check out that documentation link above as well as the one on the Modal Pipe Architecture to learn how processes communicate with each other. From there I can start to answer more specific questions and help you build this out.
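
      To visualize that glue, here is a minimal sketch of the Python side only. It assumes a small, hypothetical C helper (called tflite_pipe_client here) that you would write against libmodal-pipe to read the detection pipe and print one JSON object per detection to stdout; the helper name and the JSON fields are illustrative, not part of the VOXL SDK.

      import json
      import subprocess

      # Launch the hypothetical C helper that consumes the voxl-tflite-server
      # pipe via libmodal-pipe and prints one JSON detection per line
      proc = subprocess.Popen(
          ['./tflite_pipe_client'],
          stdout=subprocess.PIPE,
          text=True,
      )

      # React to detections as they stream in
      for line in proc.stdout:
          det = json.loads(line)  # e.g. {"class": "person", "confidence": 0.91, "bbox": [...]}
          if det.get('confidence', 0.0) > 0.8:
              print('high-confidence detection:', det.get('class'), det.get('bbox'))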

      Hope this helps!

      Thomas Patton

    • Automatic syncing object models across multiple VOXL2s
      Ask your questions right here! · Denver Bennett
      0 Votes · 4 Posts · 240 Views

      Moderator

      @Denver-Bennett Hi Denver, no, voxl-tflite-server is a TensorFlow Lite runtime environment for VOXL.

      You will need to write your own code if you want to somehow mesh models together over a network; we aren't familiar with any software that handles that today.
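
      For reference, a rough sketch of the do-it-yourself route, assuming the goal is simply to copy one .tflite file to several VOXL2s over the network and restart the inference service. The host IPs, remote model directory, and service name below are assumptions to adapt to your own setup.

      import subprocess

      # VOXL2s to sync (assumed reachable over SSH as root)
      voxl2_hosts = ['192.168.8.10', '192.168.8.11', '192.168.8.12']

      model = 'mobilenet_v2_custom_quantized.tflite'
      remote_dir = '/usr/bin/dnn/'  # assumed model directory used by voxl-tflite-server

      for host in voxl2_hosts:
          # Copy the model, then restart the service so it picks up the new file
          subprocess.run(['scp', model, f'root@{host}:{remote_dir}'], check=True)
          subprocess.run(['ssh', f'root@{host}', 'systemctl restart voxl-tflite-server'], check=True)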

    • Doodle Labs Mini Integration with VOXL 2
      Cellular Modems · Denver Bennett
      0 Votes · 7 Posts · 368 Views

      Vinny

      @Denver-Bennett Great, glad to hear that helped!

    • S1000+ integration
      Ask your questions right here! · Denver Bennett
      0 Votes · 5 Posts · 317 Views

      Denver Bennett

      @Alex-Kushleyev Thank you! This worked perfectly, got all of the motors spinning 🙂