    Best Strategy for Injecting Photorealistic Rendering Outputs into VOXL2 Camera Pipeline in Gazebo HITL

    VOXL SDK
    • Ege Yüceel
      last edited by Ege Yüceel

      Hi ModalAI community,

      I'm working on a Gazebo HITL setup with VOXL2 and need to inject photorealistic rendering outputs (GSplat, NeRF) into the camera pipeline. I'm trying to figure out the best approach and would appreciate your input. I have followed the guide at https://docs.modalai.com/voxl2-PX4-hitl/#hitl. To modify the existing Docker contents, I copy part of the directory tree (/usr/workspace/voxl2_hitl_gazebo) out of the container, make the necessary file edits there, then start the Docker container with that directory mounted back in and compile/build the code changes inside the container.

      My Setup:

      • VOXL2 running HITL with Gazebo
      • Current world: modalai.world with iris_hitl model
      • Photorealistic rendering environment where you give a pose and get an image back

      Goal: VOXL2 should use these rendered images in place of real camera output, feeding VIO and other services

      Current Understanding:

      • VOXL2 has built-in VIO that processes camera images
      • The current modalai.world has no camera sensor
      • I want to bypass camera rendering and inject photorealistic images directly
      • The images need to be available to multiple services (VIO, voxl-tflite-service, etc.)

      My Questions:

      1. Should I add a camera sensor to the world and attach a custom plugin? Or is there a cleaner way to inject images directly into the camera pipeline? (A rough sketch of what I'm considering follows this list.)
      2. If I inject rendered images directly into a camera plugin (modifying or creating a custom plugin), will VOXL2 still perform proper VIO? Or does it need actual camera sensor data?
      3. What's the minimal change needed? I want to avoid dealing with camera intrinsics, distortion models, and rendering pipelines.
      4. How do I make the injected images available to multiple VOXL2 services? I need VIO, voxl-tflite-service, and potentially other services to access the same camera feed.
      5. What is the specific plugin at /usr/workspace/voxl2_hitl_gazebo/build/ I need to modify for this?
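
      For reference on question 1, here is a rough, untested sketch of the kind of plugin I have in mind: a Gazebo-classic ModelPlugin that reads the vehicle pose every simulation step and hands it to the renderer, rather than adding a camera sensor and a rendering pipeline at all. The RendererClient type and its fetch_image() method are placeholders for my GSplat/NeRF interface, not a real API:

```cpp
// pose_to_image_plugin.cc -- rough, untested sketch
#include <cstdint>
#include <functional>
#include <vector>

#include <gazebo/gazebo.hh>
#include <gazebo/common/common.hh>
#include <gazebo/physics/physics.hh>
#include <ignition/math/Pose3.hh>

// Placeholder for the photorealistic renderer: pose in, image out.
struct RendererClient
{
  // Hypothetical call into the GSplat/NeRF environment.
  std::vector<uint8_t> fetch_image(const ignition::math::Pose3d& /*pose*/)
  {
    // Stub: a real client would query the rendering service here.
    return std::vector<uint8_t>(640 * 480, 0);
  }
};

class PoseToImagePlugin : public gazebo::ModelPlugin
{
public:
  void Load(gazebo::physics::ModelPtr model, sdf::ElementPtr /*sdf*/) override
  {
    this->model = model;
    // Run once per physics step so the pose stays in sync with the simulation.
    this->updateConn = gazebo::event::Events::ConnectWorldUpdateBegin(
        std::bind(&PoseToImagePlugin::OnUpdate, this));
  }

  void OnUpdate()
  {
    // World pose of the vehicle; a real setup would apply a camera-to-body offset.
    const ignition::math::Pose3d pose = this->model->WorldPose();
    std::vector<uint8_t> image = renderer.fetch_image(pose);
    (void)image; // TODO: hand the frame to whatever feeds the VOXL2 side
  }

private:
  gazebo::physics::ModelPtr model;
  gazebo::event::ConnectionPtr updateConn;
  RendererClient renderer;
};

GZ_REGISTER_MODEL_PLUGIN(PoseToImagePlugin)
```

      The appeal of the ModelPlugin route is that it sidesteps Gazebo's camera rendering, intrinsics, and distortion models entirely, which is what I was getting at in question 3.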
      • Eric Katzfey ModalAI Team @Ege Yüceel
        last edited by

        @Ege-Yüceel Wow, this is a very ambitious plan! Keep in mind that VIO needs not only camera images but also IMU input. Camera images are normally supplied by the voxl-camera-server on MPA pipes. So, you would need to stop voxl-camera-server and start your own application that takes the images from Gazebo and places them into correctly named MPA pipes. The images would have to be formatted properly, etc. Likewise for IMU samples that normally come from voxl-imu-server. I think this will be quite difficult to get working properly but theoretically possible.
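
        To make the shape of that concrete, here is a very rough, untested sketch of the image half only, assuming libmodal-pipe's server API (pipe_server_create / pipe_server_write_camera_frame) and the camera_image_metadata_t layout from modal_pipe_interfaces.h. Treat the pipe name, type string, buffer size, and metadata fields as assumptions and verify them against the headers and the voxl-camera-server source in your SDK release:

```cpp
// mpa_camera_injector.cc -- very rough sketch, not compiled or tested.
// Assumes libmodal-pipe's server API; verify struct fields and function
// signatures against modal_pipe_server.h / modal_pipe_interfaces.h in your SDK.
#include <cstdint>
#include <cstring>
#include <time.h>
#include <unistd.h>
#include <vector>

#include <modal_pipe_interfaces.h>
#include <modal_pipe_server.h>

static constexpr int CH     = 0;    // server channel used by this process
static constexpr int WIDTH  = 640;  // must match what the subscribers expect
static constexpr int HEIGHT = 480;

static int64_t monotonic_time_ns()
{
    timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
}

int main()
{
    // Advertise the pipe where voxl-camera-server would normally publish the
    // tracking camera, so qvio-server and friends can subscribe unchanged.
    pipe_info_t info;
    memset(&info, 0, sizeof(info));
    strcpy(info.name,        "tracking");                 // ends up under /run/mpa/
    strcpy(info.type,        "camera_image_metadata_t");  // assumed type string
    strcpy(info.server_name, "gazebo-image-injector");
    info.size_bytes = 64 * 1024 * 1024;                   // pipe buffer size

    if (pipe_server_create(CH, info, 0) != 0) return -1;

    std::vector<uint8_t> image(WIDTH * HEIGHT);           // RAW8 grayscale frame
    int32_t frame_id = 0;

    while (true) {
        // ... fetch the next rendered frame for the current Gazebo pose here,
        // convert it to 8-bit grayscale, and copy it into `image` ...

        camera_image_metadata_t meta;
        memset(&meta, 0, sizeof(meta));
        meta.magic_number = CAMERA_MAGIC_NUMBER;
        meta.timestamp_ns = monotonic_time_ns(); // must share a clock with IMU data
        meta.frame_id     = frame_id++;
        meta.width        = WIDTH;
        meta.height       = HEIGHT;
        meta.stride       = WIDTH;
        meta.size_bytes   = WIDTH * HEIGHT;
        meta.format       = IMAGE_FORMAT_RAW8;

        pipe_server_write_camera_frame(CH, meta, image.data());
        usleep(33333); // crude ~30 fps pacing for the sketch
    }

    return 0; // not reached
}
```

        Note that the timestamps used here have to come from the same clock as whatever you feed in for IMU samples, otherwise VIO has no chance of tracking. The IMU side, plus getting the image format and pipe naming exactly right, is where most of the difficulty will be.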
