Using the Vicon Motion Capture System to Estimate Position and Heading
- Re: Using Motion Capture Systems (Optitrack) for Position Estimation
This post is a follow-up to the forum post mentioned above. I want to estimate the position and heading of an m500 (with a VOXL computer) using Vicon cameras. I describe my setup and issues below.
Setup:
- A set of Vicon cameras.
- A host computer that runs a custom Docker container that collects Vicon pose messages and republishes them via ROS. This computer is also the ROS master and runs QGroundControl.
- A VOXL drone that runs the `roskinetic-xenial:v1.0` Docker image, has the `run_mavros.sh` script from Modal AI, and knows that the host computer is the ROS master.
- In addition, I installed `mavros-extras` in the `roskinetic-xenial:v1.0` container to access the `/mavros/odometry/out` and `/mavros/odometry/in` topics.
The VOXL drone runs in `station` mode and is connected to my WiFi network. The drone can run `rostopic echo <topic_name>` and see the messages published over the network, including those from the container that publishes the Vicon poses via ROS. QGroundControl also sees the drone whenever it is running.

Issues:
- What EKF2 parameters should I change so the drone uses the Vicon system? I saw a suggestion elsewhere to set `EKF2_AID_MASK = 24`, which enables *vision position fusion* and *vision yaw fusion*. Is this configuration correct? Do I need to set any other parameters?
- What VOXL parameter do I need to set (if any) to tell the VOXL to use a motion capture system instead of the onboard VIO?
- PX4 (on the drone) reports errors when I publish the Vicon odometry messages on the `/mavros/odometry/out` topic. The errors mainly relate to the frame IDs used. What should I set for the `frame_id` and the `child_frame_id` in the odometry messages?
- How can I test whether PX4 receives the odometry messages via ROS?
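On the `EKF2_AID_MASK` question above: the value 24 is consistent with the bit layout documented in the PX4 parameter reference for pre-v1.14 firmware. A minimal sketch of how the mask decomposes (bit positions should be verified against the reference for your exact firmware version):

```python
# EKF2_AID_MASK bit positions (per the PX4 parameter reference for pre-v1.14
# firmware -- confirm against the docs for your exact version):
#   bit 0 (1)  GPS fusion
#   bit 3 (8)  vision position fusion
#   bit 4 (16) vision yaw fusion
VISION_POSITION_FUSION = 1 << 3   # 8
VISION_YAW_FUSION      = 1 << 4   # 16

aid_mask = VISION_POSITION_FUSION | VISION_YAW_FUSION
print(aid_mask)  # -> 24: vision position + yaw enabled, GPS bit cleared
```

A parameter commonly recommended alongside this is `EKF2_HGT_MODE = 3` (vision), so the EKF takes its height reference from the mocap data rather than the barometer; treat that as a suggestion to verify, not a confirmed requirement.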
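On the frame-ID question: a configuration often reported to work with the mavros odometry plugin is `header.frame_id = "odom"` and `child_frame_id = "base_link"` on messages published to `/mavros/odometry/out`. The sketch below is an assumption-laden outline of a Vicon-to-mavros relay, not a confirmed fix; the guarded import lets the frame-naming logic load on a machine without ROS installed:

```python
# Sketch of a relay that republishes Vicon poses as nav_msgs/Odometry on
# /mavros/odometry/out. The frame names below ("odom", "base_link") are a
# commonly reported working choice for the mavros odometry plugin -- verify
# against your mavros version.
try:
    import rospy
    from nav_msgs.msg import Odometry
except ImportError:          # allows the frame logic to run/test off-robot
    rospy = None

ODOM_FRAME = "odom"          # header.frame_id: world/local frame of the pose
BODY_FRAME = "base_link"     # child_frame_id: body frame the twist is given in

def fill_frames(odom_msg):
    """Set the frame IDs mavros/PX4 expect on an outgoing odometry message."""
    odom_msg.header.frame_id = ODOM_FRAME
    odom_msg.child_frame_id = BODY_FRAME
    return odom_msg

if rospy is not None:
    rospy.init_node("vicon_to_mavros")
    pub = rospy.Publisher("/mavros/odometry/out", Odometry, queue_size=10)
    # ... subscribe to the Vicon topic, copy pose (and twist, if available)
    # into an Odometry, call fill_frames(msg), stamp with rospy.Time.now(),
    # then pub.publish(msg)
```

If the plugin still complains, check that the frames named in the message actually exist in your tf tree, since the plugin resolves the names via tf.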
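On verifying reception, two quick checks. On the PX4 side, recent firmware exposes incoming external vision data on the `vehicle_visual_odometry` uORB topic, which you can watch with `listener vehicle_visual_odometry` in the QGroundControl MAVLink console. On the ROS side, a minimal sketch (guarded import as above; topic name assumed from the setup described here) counts messages coming back on `/mavros/odometry/in`:

```python
# Count odometry messages coming back from PX4 on /mavros/odometry/in.
# If the EKF is consuming the external vision input, this topic should
# publish at a steady rate; a count of 0 after a few seconds suggests the
# data is not reaching (or not leaving) the flight controller.
try:
    import rospy
    from nav_msgs.msg import Odometry
except ImportError:   # lets the counting logic run/test without ROS installed
    rospy = None

count = {"n": 0}

def on_odom(msg):
    # Callback: tally each Odometry message received from mavros.
    count["n"] += 1

if rospy is not None:
    rospy.init_node("odom_in_check")
    rospy.Subscriber("/mavros/odometry/in", Odometry, on_odom)
    rospy.sleep(5.0)
    print("messages in 5 s:", count["n"])
```

From the command line, `rostopic hz /mavros/odometry/in` gives the same answer without writing a node.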
@DarkKnightCH and @IC, were you able to solve the issues you were experiencing in the post referenced above?