https://gitlab.com/voxl-public/support/voxl-train-yolov8
I am assuming you followed this instruction set for training your model?
@jeremyrbrown5 if you upload your model here I can download it and help troubleshoot the issue you are running into
@Jakub-Tvrz said in Starling 2 Max for indoor:
We haven't bought anything yet, but we are deciding what will be the best for our warehouse inventory application. Of course, we considered the ToF option. But you still say: "however, it is not... great" - does that mean that Starling 2 is better for our application? Or does Dual vs Triple camera for Visual Inertial Odometry also have an impact?
Depends fully on your use case. I will say that PX4's instance of obstacle avoidance is not fantastic, but it is better than nothing. In my opinion I would go with the model that has all the bells and whistles in case you need the hardware and decide to develop your own collision avoidance or something of that nature.
@jeremyrbrown5 said in Issues with custom Yolov8:
@Zachary-Lowell-0 we are using the yolo predict function inside the voxl-docker. I didn't know about voxl-inspect-detections, so I'll try that.
If you are running your model directly on voxl-tflite-server then you can leverage the VOXL SDK to detect any outputs from the model. That SDK is what shows the image on voxl-portal. My guess is that since the images aren't showing on voxl-portal, you are having an issue during startup.
Can you run voxl-tflite-server directly on the command line and paste the output in here?
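A minimal sketch of that, assuming the stock SDK service setup (stop the background service first so two instances don't fight over the camera pipes):

systemctl stop voxl-tflite-server    # stop the background systemd instance
voxl-tflite-server                   # run in the foreground so startup errors print to the console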
@jeremyrbrown5 I will plan on testing this out tomorrow - how are you validating that it is detecting the RPis? Are you running voxl-inspect-detections tflite_data -a?
You will need the SKU supporting the TOF to accurately run current versions of voxl-mapper and VOA.
Zach
The system has an HDMI out, so yes, you can plug it into an external monitor.
@Ido-Goldstein to run yolov8, run voxl-configure-tflite-server and, when it prompts you to select your model, select yolov8. YOLOv11 is also currently supported, however it is still CPU bound, so I would recommend using yolov8 for GPU processing.
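A hedged sketch of the full flow, assuming stock SDK service names (the menu options vary by SDK version):

voxl-configure-tflite-server             # interactive; select the yolov8 model when prompted
systemctl restart voxl-tflite-server     # restart the service with the new config
voxl-inspect-detections tflite_data -a   # confirm detections are being published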
@Jakub-Tvrz said in Starling 2 Max for indoor:
● Next Generation PMD ToF image sensor for obstacle avoidance and 3D mapping
@Jakub-Tvrz if you purchased the Starling 2 Max version that comes with the PMD TOF then you will be capable of running PX4's instance of visual obstacle avoidance - however, it is not... great.
https://www.modalai.com/collections/all/products/starling-2-max?variant=49129974169904 <-- this is the version you would need to buy to get the TOF sensor
@taiwohazeez unsure if I am following this "hover mode" - are you attempting to put the drone into HOLD mode? If so, HOLD mode on base PX4 isn't supported without GPS, so if you are flying indoors there is a high likelihood that the GPS is relaying poor data, causing the drone to drift a bit. I would recommend turning off the GPS via the parameters in QGC and, instead of going into HOLD mode, just staying in POSITION mode while letting go of the sticks, and see if it holds then.
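As a rough sketch, the relevant parameter depends on your PX4 version - these names come from upstream PX4, so verify them against your firmware before changing anything (from the PX4 shell, or equivalently in QGC's parameter editor):

param set EKF2_GPS_CTRL 0    # newer PX4: disable GPS fusion in EKF2
# on older PX4 builds, clear the GPS bits in EKF2_AID_MASK instead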
Zach
@all - you can just change the SYS_ID of each drone and this should solve the port conflict.
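For reference, the PX4 parameter is MAV_SYS_ID; a quick sketch from the PX4 shell (you can also set it in QGC's parameter editor):

param set MAV_SYS_ID 2    # give each vehicle a unique system ID
param save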
@tomverstappen I am unable to recreate the issue with a Starling running 1.5.0 - the output from voxl-vision-hub shows the tag's coordinate frame relative to the camera, and I was able to get the drone to offset correctly.
Are there any logs you can paste in here or video?
Hello All - I will be trying to recreate this issue in the coming days - will keep you all posted.
Detections are pertinent to voxl-tflite-server and onboard computer vision with YOLO, NOT to AprilTags.
@taiwohazeez I am still not fully convinced you are running the right command.
Running voxl-inspect-tags should show the following headers:
ID | XYZ | RPY | SIZE_M | CAM
As shown in the code below:
printf("\n"); printf("id:%3d name: %s\n", d.id, d.name); printf("XYZ: %6.2f %6.2f %6.2f \n", (double)d.T_tag_wrt_cam[0], (double)d.T_tag_wrt_cam[1], (double)d.T_tag_wrt_cam[2]); printf("RPY: %6.1f %6.1f %6.1f \n", (double)roll, (double)pitch, (double)yaw); printf("size_m: %4.2f latency_ms: %6.2f\n", (double)d.size_m, latency_ms); printf("cam: %s type: %s\n", d.cam, pipe_tag_location_type_to_string(d.loc_type)); printf("\n");
Just run voxl-inspect-tags and that will output the right data.
Zach
Class confidence and detection confidence are pertinent to voxl-tflite-server, not voxl-tag-detector - are you sure this is the right output?
Hi @taiwohazeez - can you paste the output? Usually you get NaNs or "unknown" when the locations.txt file isn't filled out, meaning the tag detector service is referencing an empty file. Once you paste the output of voxl-inspect-detections I will be able to give you better guidance.
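As a quick sanity check, make sure the locations file actually has tag entries in it - the path below is assumed from recent SDKs, so verify it on your unit:

cat /etc/modalai/tag_locations.conf    # should list your tag IDs, not be empty (path assumed)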
Thanks!
Zach
@Evan-Hertafeld said in Using MPA within voxl-docker-mavsdk-cpp:
libmodal_pipe
You need to share all those files via a volume mount, alongside the /run/mpa directory, to allow Docker to probe the data and grab it from an MPA pipe.
So in your docker run, something like -v /usr/include/modal_start_stop.h:/usr/include/modal_start_stop.h needs to be done. The other option is to just install the actual MPA Debian package inside your Docker image and call it a day - then you will have all the necessary headers, libs, and binaries, and you just need to share /run/mpa as a volume to access the data.
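A minimal sketch of the volume-mount approach - the image name is a placeholder and the exact header/lib list depends on what your code actually uses:

docker run -it --rm \
    -v /run/mpa:/run/mpa \
    -v /usr/include/modal_start_stop.h:/usr/include/modal_start_stop.h \
    -v /usr/lib/libmodal_pipe.so:/usr/lib/libmodal_pipe.so \
    my-mavsdk-image    # hypothetical image name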
Zach
@Jetson-Nano you could just use the C++ MAVSDK instance to grab the TOF data from an MPA pipe and then leverage that to do your controls via MAVSDK (I am assuming just offboard movement). You can connect over port 14551 - just ensure you have localhost comms set to true in voxl-vision-hub.
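A hedged sketch of wiring that up - the config key name is assumed from recent voxl-vision-hub versions, so check it against your SDK:

# enable localhost MAVLink UDP in voxl-vision-hub (key name assumed)
vi /etc/modalai/voxl-vision-hub.conf     # set "en_localhost_mavlink_udp": true
systemctl restart voxl-vision-hub
# MAVSDK can then connect with udp://:14551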
@nikolahm is your drone PID tuned? How are you flying in position mode - GPS-based flight, VIO-based flight, etc.? What is the environment you are testing in? Can you paste a log in here please?