@tiralonghipol and community
Here it is,
https://github.com/zauberflote1/voxlSyncVINS
zauberflote1
@zauberflote1
I like drones, VINS, C++, and Lie Theory. The rest is history...
Best posts made by zauberflote1
-
voxl-open-vins-server How to Use, Overall Questions, ROS/ROS2 Findings
Hello ModalAI and Community,
I have been trying to integrate a modded version of OpenVINS with my Sentinel drone for the past week through ROS or ROS2; however, I ran into serious difficulties during calibration when inspecting the ROS bags, which I believe are due to the mpa-to-ros and mpa-to-ros2 logic. (For the curious, I'll write the code segments below and briefly discuss why this "breaks" the bag. If you're not a developer, this may not be useful; in that case, just scroll down.)
mpa-to-ros2
Timestamps (stereo_interface.cpp example)

imgL.header.stamp.nanosec = meta.timestamp_ns;
It is not ideal to use monotonic clock time here due to the jumps. A solution similar to the one used in the mpa-to-ros node needs to be applied. One should also "repeat" the logic used for the IMU server to fix the timestamps, i.e., discard messages if, after the conversion to ROS2 time, timestamp < timestamp_prev. (The quick-and-dirty alternative is to stamp with the current ROS2 time, but your VINS will not like that...) Note that the information for stereo images arrives with the two images already in the same "meta," so there's no need to repeat the logic done in the camera_server for time-syncing the cameras. Note 2: you'll need to convert time in all interfaces.
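For reference, here is a minimal sketch of that conversion-plus-discard logic, assuming meta.timestamp_ns carries CLOCK_MONOTONIC time and that the offset to ROS2 time is sampled once at startup (the helper and variable names are mine, not from the actual mpa-to-ros2 source):

#include <ctime>
#include <rclcpp/rclcpp.hpp>

// offset between CLOCK_MONOTONIC and ROS2 time, sampled once at startup
static int64_t monotonic_to_ros_offset_ns(rclcpp::Node* node) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    const int64_t mono_ns = (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    return node->get_clock()->now().nanoseconds() - mono_ns;
}

// in the image helper callback: convert, then discard out-of-order stamps
static int64_t prev_stamp_ns = 0;
const int64_t stamp_ns = meta.timestamp_ns + offset_ns; // offset_ns cached from the helper above
if (stamp_ns <= prev_stamp_ns) return;                  // timestamp < previous -> drop the message
prev_stamp_ns = stamp_ns;
imgL.header.stamp = rclcpp::Time(stamp_ns);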
Interface (interface_manager.cpp)
(This might be intended behavior, but it is definitely not suitable for VINS...) If you preview the ROS2 bag, you'll quickly notice that the stereo images are combined into a single image. Depending on your application, this may be good or bad, but if you're trying to calibrate your sensors for optimal accuracy using Kalibr, this won't make the cut; besides, most open-source VINS codebases expect two separate image topics. Below is how I fixed this issue...
In all_interfaces.h
#include "voxl_mpa_to_ros2/interfaces/stereo_interface.h"
In interface_manager.cpp

Copy the logic already done in mpa-to-ros: in the function findPipes, for the curType == INT_CAMERA case in the switch, add an if statement covering the case where newNode->name equals "stereo", and construct a StereoInterface there:

case INT_CAMERA:
    if (strstr(newNode->name, "stereo")) { // can just use the name var because it's already copied, but kept explicit for understanding
        newNode->interface = new StereoInterface(nh, newNode->name);
    } else {
        newNode->interface = new CameraInterface(nh, newNode->name);
    }
    break;
There are some extra steps needed for the perfect break, including at the camera_server level, because each image timestamp should be different as per the camera_server logic. This is actually important for calibration purposes.
Subscriber count and ThreadManageInterfaces
IMO, this is why users can't see ROS2 topics on their host computer. The subscriber count is suitable for constrained-resource environments and for someone who doesn't need messages apart from logging, but it is currently a problem if you plan on testing an algorithm for navigation, control, or guidance. ThreadManageInterfaces sleeps for 1 s every time it is invoked, and if the subscriber count misses your subscriber during a dt between messages, you'll get a jump in timestamps. Another consequence of this implementation often appears if you try to echo image topics on a host computer: you may be unable to receive them at all. Note that you can still receive smaller messages, like IMU. Ideally, there's probably a way to fix this while keeping the subscriber count, but a quick and dirty solution is constantly publishing the topics you'll need (see the sketch below). This will save you time and keep the sampling frequency somewhat constant. Hopefully, ModalAI will be able to address this in the future.
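For reference, the quick-and-dirty version is simply dropping the subscription-count gate for the topics you always need; a minimal ROS2 sketch of the change (the surrounding names are illustrative, not the exact mpa-to-ros2 code; the ROS1 equivalent gate is pub.getNumSubscribers() > 0):

// before (roughly): only publish when someone is counted as subscribed
// if (publisher_->get_subscription_count() > 0) { publisher_->publish(msg); }
// after: publish unconditionally for topics you always want on the wire
publisher_->publish(msg); // costs CPU/bandwidth with no subscribers, but keeps the rate steady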
Please note that there may be other reasons why image topics are not being received on your host computer, ranging from QoS settings and environment setup to publisher rate.
(In short, if you create a simple talker ROS2 node on VOXL2 and a listener node on your host PC, and can successfully receive messages, your problem is likely at least partially due to what I described.)
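A quick way to run that check without writing any code is to use the stock demo nodes, assuming they are installed on both ends:

# on VOXL2
ros2 run demo_nodes_cpp talker
# on the host PC
ros2 run demo_nodes_py listener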
mpa-to-ros
Timestamps
One needs to apply the same approach as for ROS2, apart from the time conversion. Also, note that the message construction in one of the latest commits forces the image on the left to have the same timestamp as the image on the right. As I mentioned before, this is incorrect behavior from the standpoint of VINS, as one can use calibration tools to find the average offset rather than artificially forcing the stamps to be the same.

Subscriber Count and Interface
Here, the problem is not as bad as in ROS2, but I recommend taking the same approach with regard to constantly publishing the topics you need. Potentially create args or a launch file to pass the values of the topics you need (a sketch of this idea follows the closing remarks below).

The above steps are a simplified approach to "fix" the issues I described. If you have a better solution or another alternative approach, please share. I hope we all can gain from this.
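As an example of the args/launch-file idea above, a minimal ROS1 sketch with a hypothetical ~always_publish_topics parameter you would add to your own modified node (not a stock mpa-to-ros option):

#include <ros/ros.h>
#include <algorithm>
#include <string>
#include <vector>

// hypothetical private param, set from a launch file or rosparam
ros::NodeHandle pnh("~");
std::vector<std::string> always_on;
pnh.param("always_publish_topics", always_on, std::vector<std::string>{});

// in the publish path: bypass the subscriber-count gate for listed topics
const bool force = std::find(always_on.begin(), always_on.end(), topic_name) != always_on.end();
if (force || pub.getNumSubscribers() > 0) {
    pub.publish(msg);
}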
voxl-open-vins-server
When I updated to the latest SDK, I noticed the server came bundled with it. This was a pleasant surprise, as I was going through this battle to implement a similar architecture. I noticed that the calibration files it uses are the OpenCV ones, but I couldn't find the IMU calibration files. In any case, I would like to know where I can find the source code for the voxl-open-vins-server so I can start modifying it to leverage both the MPA and external config files like Kalibr yamls.
My apologies if this post was too extensive
--ZBFT
-
Learning how to lock a thread to a CPU core
Hello,
If you are developing your own custom pipelines and applications, or modifying existing source code from ModalAI, this may be applicable to you.

Motivation: Control which CPU core is responsible for a certain thread in your custom application.
Background: It all started when I was observing the htop behavior. Sometimes the dynamic core allocation overloads a particular core instead of keeping the effort evenly distributed, potentially causing issues if you have a "real-time"-sensitive application -- assuming you have already set the priority and scheduler properly...
Tackling the issue: If you look in the qvio-server source files, you will find that a particular computer vision thread is locked to CPU core 7 -- known to be the fastest VOXL2 core. So what if we use the same idea in a custom application (potentially for a different core) -- would it work? YES.
How to do it:
PS1: I'd rather use ROS over modalpipes, so the full example linked is for a modified IMU server that controls the driver entirely via ROS, but you can do the same for a modalpipe implementation WLOG.

Using the heavenly chosen language, C++, and ROS syntax:
void _set_affinity_to_core_7() {
    cpu_set_t cpuset;
    pthread_t thread = pthread_self();

    //SET IT UP TO CORE 7
    CPU_ZERO(&cpuset);
    //change this to the core you want
    CPU_SET(7, &cpuset);
    if (pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset)) {
        perror("pthread_setaffinity_np");
    }

    //CHECK IF IT WAS SET UP
    if (pthread_getaffinity_np(thread, sizeof(cpu_set_t), &cpuset)) {
        perror("pthread_getaffinity_np");
    }
    ROS_INFO("IMU read thread is now locked to core: ");
    for (int j = 0; j < CPU_SETSIZE; j++) {
        if (CPU_ISSET(j, &cpuset)) {
            ROS_INFO(" %d", j);
        }
    }
    ROS_INFO("\n");
}
In main(), using boost, define the thread lambda as follows:
std::vector<boost::thread> threads;
for (int i = 0; i < 1; ++i) {
    if (imu_enable[i]) {
        //IF YOU DONT WANT TO LOCK A SPECIFIC CORE, THEN
        // threads.push_back(boost::thread(boost::bind(&readThreadFunc, i)));
        threads.push_back(boost::thread([i]() {
            _set_affinity_to_core_7();
            readThreadFunc(i);
        }));
    }
}
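If you are not using boost, the same trick works with std::thread, since on Linux its native_handle() is the underlying pthread; a minimal sketch (readThreadFunc stands in for your own worker):

#include <thread>
#include <pthread.h>

std::thread t([]() { readThreadFunc(0); });
cpu_set_t cpuset;
CPU_ZERO(&cpuset);
CPU_SET(7, &cpuset); // pin to core 7 -- change to the core you want
pthread_setaffinity_np(t.native_handle(), sizeof(cpu_set_t), &cpuset);
t.join(); // or detach, depending on your lifecycle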
PS2: Full Example can be found here:
https://github.com/zauberflote1/voxlSyncVINS/blob/91d74b57cacd78db87c6bd48ac03f14ca36874ab/imu_ros/src/voxl_imu_ros/server/src/node.cpp#L43C32-L43C33

Final Remarks:
Only do this if you know what you are doing, as you may overload a core by accident.

All the best,
ZBFT
Latest posts made by zauberflote1
-
VOXL-MAPPER WITHOUT VOXL-PORTAL
Hello ModalAI community,
Hardware: VOXL2 (Sentinel)

Recently, I was trying to run a few experiments that required a conservative obstacle avoidance capability. My objective was to send relative position setpoints (drone-centered frame) to voxl-mapper; however, I noticed that the only available package that allows pipelining setpoints to voxl-mapper is voxl-portal, and the user has to manually click each individual setpoint on the portal site.
Note that I am not talking about sending a full trajectory, just a goal setpoint, letting voxl-mapper find the optimal path to it.
Hence, I developed what I call the mapper-commander package. Of course, implementation details will vary from application to application, but for this base-case example, the package allows the user to send a square maneuver -- you can easily modify this as you wish, e.g., a figure-8 maneuver.
How does it work?
- Grab VIO estimates and insert them into the ring buffer
- Add the desired relative setpoints to the current position from the ring buffer
- Send control pipe commands to plan, follow, and stop following accordingly
- A separate thread controls the cmdHandler callbacks and sleeps based on the realtime clock
/**
 * @ Author: zauberflote1
 * @ Create Time: 2024-10-06 23:20:17
 * @ Modified by: zauberflote1
 * @ Modified time: 2024-10-11 19:31:20
 * @ Description: PIPELINE TO VOXL-MAPPER VIA COMMANDS
 */

#include "mapper_cmd.hpp"

MapperCmd::MapperCmd() {
    ch_mapper = pipe_client_get_next_available_channel();
    ch_vio = pipe_client_get_next_available_channel();

    pose_buffer = RC_TF_RINGBUF_INITIALIZER;
    rc_tf_ringbuf_alloc(&pose_buffer, 2500);

    pipe_client_set_simple_helper_cb(ch_vio, [](int ch, char* data, int bytes, void* context) {
        static_cast<MapperCmd*>(context)->_vio_helper_cb(ch, data, bytes, context);
    }, this);
    pipe_client_set_connect_cb(ch_vio, [](int ch, void* context) {
        static_cast<MapperCmd*>(context)->_vio_connect_cb(ch, context);
    }, this);
    pipe_client_set_disconnect_cb(ch_vio, [](int ch, void* context) {
        static_cast<MapperCmd*>(context)->_vio_disconnect_cb(ch, context);
    }, this);

    //SQUARE MISSION ONLY
    //1M SQUARE
    square_mission <<  1.0,  0.0, 0.0,
                       0.0,  1.0, 0.0,
                      -1.0,  0.0, 0.0,
                       0.0, -1.0, 0.0;
    pts_togo = square_mission.rows();
}

MapperCmd::~MapperCmd() {
    //CLOSE PIPES -- AND FREE BUFFERS?
    _cmdHandler_terminate_cb();
    pipe_client_close_all();
}

int MapperCmd::getRobotPose(rc_tf_t &tf_body_wrt_fixed, int64_t ts) {
    static int error_ctr = 0;
    int ret = rc_tf_ringbuf_get_tf_at_time(&pose_buffer, ts, &tf_body_wrt_fixed);
    if (ret < 0) {
        error_ctr++;
        if (error_ctr > 5) {
            //fprintf(stderr, "ERROR fetching tf from tf ringbuffer\n");
            if (ret == -2) {
                printf("INFO waiting for VIO to start\n");
                error_ctr = 0;
            }
            if (ret == -3) {
                printf("WARNING the requested timestamp was too new, VIO may have stopped\n");
                error_ctr = 0;
            }
            if (ret == -4) {
                printf("WARNING the requested timestamp was too old\n");
            }
        }
        return -1;
    }
    return 0;
}

int MapperCmd::sendPlanCommand(Eigen::Vector3d goal_vtf) {
    char command[100];
    //DO NOT STOP FOLLOWING IF THIS IS THE FIRST ITERATION -- MAY CHANGE THIS IN THE FUTURE...
    if (goal_plan) { //CHECK POTENTIAL RACE CONDITION
        snprintf(command, sizeof(command), STOP_FOLLOWING); // STOP_FOLLOWING = "stop_following"
        if (pipe_client_send_control_cmd(ch_mapper, command) == 0) {
            std::cout << "Sent STOP_FOLLOWING command: " << command << std::endl;
        } else {
            std::cerr << "Failed to send STOP_FOLLOWING command" << std::endl;
            return -1;
        }
    }
    usleep(500000); // SLEEP FOR 0.5 SECOND TO GIVE THE SERVER TIME TO PROCESS STOP_FOLLOWING COMMAND

    snprintf(command, sizeof(command), "plan_to:%.2f,%.2f,%.2f", goal_vtf.x(), goal_vtf.y(), goal_vtf.z());
    // Send the command to the server using pipe_client_send_control_cmd
    if (pipe_client_send_control_cmd(ch_mapper, command) == 0) {
        std::cout << "Sent PLAN_TO command: " << command << std::endl;
    } else {
        std::cerr << "Failed to send PLAN_TO command" << std::endl;
        return -1;
    }
    usleep(500000); // SLEEP FOR 0.5 SECOND TO GIVE THE SERVER TIME TO PROCESS PLAN_TO COMMAND

    snprintf(command, sizeof(command), FOLLOW_PATH); // FOLLOW_PATH = "follow_path"
    if (pipe_client_send_control_cmd(ch_mapper, command) == 0) {
        std::cout << "Sent FOLLOW_TO command: " << command << std::endl;
    } else {
        std::cerr << "Failed to send FOLLOW_TO command" << std::endl;
        return -1;
    }
    return 0;
}

//////////////////////////////////////////////////////////////////////////////////////////////////////////
//VIO HELPER FUNCTIONS (FROM VOXL-MAPPER)
void MapperCmd::_vio_helper_cb(__attribute__((unused)) int ch, char *data, int bytes, __attribute__((unused)) void *context) {
    // validate data
    int n_packets;
    pose_vel_6dof_t *d = pipe_validate_pose_vel_6dof_t(data, bytes, &n_packets);
    // if there was an error OR no packets received, just return
    if (d == NULL) {
        std::cerr << "[VIO] ERROR: Failed to validate pose_vel_6dof_t data" << std::endl;
        return;
    }
    if (n_packets <= 0) {
        std::cerr << "[VIO] ERROR: No packets received" << std::endl;
        return;
    }
    for (int i = 0; i < n_packets; i++) {
        rc_tf_ringbuf_insert_pose(&pose_buffer, d[i]);
    }
    // std::cout << "[VIO] GOT NEW" << std::endl;
    return;
}

void MapperCmd::_vio_connect_cb(__attribute__((unused)) int ch, __attribute__((unused)) void *context) {
    printf("Connected to VIO server\n");
}

void MapperCmd::_vio_disconnect_cb(__attribute__((unused)) int ch, __attribute__((unused)) void *context) {
    printf("Disconnected from VIO server\n");
}

//////////////////////////////////////////////////////////////////////////////////////////////////////////
//MAPPERCMD HELPER FUNCTIONS
void MapperCmd::cmdHandler() {
    //GET CURRENT POSITION
    rc_tf_t tf_body_wrt_fixed = RC_TF_INITIALIZER;
    int ret_cmd = -2;
    if (getRobotPose(tf_body_wrt_fixed, my_time_monotonic_ns()) != 0) {
        return; //EARLY EXIT IF WE FAIL TO GET THE CURRENT POSITION
    }
    Eigen::Vector3d current_pos = Eigen::Vector3d(tf_body_wrt_fixed.d[0][3], tf_body_wrt_fixed.d[1][3], tf_body_wrt_fixed.d[2][3]);
    // std::cout << tf_body_wrt_fixed.d[0][3] << std::endl;
    // std::cout << tf_body_wrt_fixed.d[1][3] << std::endl;
    // std::cout << tf_body_wrt_fixed.d[2][3] << std::endl;

    if (pts_togo == 0) {
        std::cout << "MISSION COMPLETE" << std::endl;
        return;
    }
    Eigen::Vector3d goal_pos = square_mission.row(pts_togo - 1).transpose() + current_pos;

    //START GOAL PLAN, IF WE HAVEN'T ALREADY
    if (!goal_plan) {
        //SEND INITIAL GOAL
        ret_cmd = sendPlanCommand(goal_pos);
        if (ret_cmd != 0) {
            std::cerr << "[MAPPER-CMD] Failed to send initial goal" << std::endl;
            return;
        }
        goal_plan = true; //CHECK POTENTIAL RACE CONDITION IF USING MULTIPLE DRONES
    } else {
        if ((current_pos - (square_mission.row(pts_togo)).transpose()).norm() < 0.1) {
            //SEND NEW GOAL
            ret_cmd = sendPlanCommand(goal_pos);
            if (ret_cmd != 0) {
                std::cerr << "[MAPPER-CMD] Failed to send new goal" << std::endl;
                return;
            }
        } else {
            // std::cout << "WAITING FOR GOAL" << std::endl;
            return;
        }
    }
    if (pts_togo > 0) { //EXTRA SAFETY CHECK
        pts_togo--;
    }
}

void MapperCmd::_cmdHandler_cb() {
    int64_t next_time = 0;
    while (!stop_thread) {
        {
            std::lock_guard<std::mutex> lock(cmd_mutex);
            // std::cout << "[CMD HANDLER] Iteration start" << std::endl;
            cmdHandler();
        }
        // std::cout << "[CMD HANDLER] Sleeping for 20Hz cycle." << std::endl;
        my_loop_sleep(20.0, &next_time); //20HZ, MIGHT BE TOO FAST OR TOO SLOW...
    }
    // std::cout << "[CMD HANDLER] Thread stopping." << std::endl;
}

void MapperCmd::_cmdHandler_terminate_cb() {
    stop_thread = true;
    if (cmd_thread.joinable()) {
        cmd_thread.join();
    }
}

void MapperCmd::initMPA() {
    //MAPPER
    ret_mapper = pipe_client_open(ch_mapper, PLAN_LOCATION, "zbft-mapper-cmd",
                                  EN_PIPE_CLIENT_AUTO_RECONNECT,
                                  1024 * 1024 * 64); //NOT SURE ABOUT THE BUFFER SIZE HERE...JUST COPYING FROM VOXL-MAPPER
    if (ret_mapper != 0) {
        std::cerr << "[MAPPER] Failed to open the pipe client." << std::endl;
        return;
    }
    std::cout << "[MAPPER] Pipe client opened successfully." << std::endl;

    //VIO
    ret_vio = pipe_client_open(ch_vio, BODY_WRT_FIXED_POSE_PATH, "zbft-mapper-cmd",
                               EN_PIPE_CLIENT_SIMPLE_HELPER | EN_PIPE_CLIENT_AUTO_RECONNECT,
                               POSE_6DOF_RECOMMENDED_READ_BUF_SIZE);
    if (ret_vio != 0) {
        std::cerr << "[VIO] Failed to open the pipe client." << std::endl;
        return;
    }
    std::cout << "[VIO] Pipe client opened successfully." << std::endl;

    //CMD THREAD
    cmd_thread = std::thread(&MapperCmd::_cmdHandler_cb, this);
}
Observation 1:
My goal was to not modify the voxl-mapper package, but if you decide that integrating mapper-commander into voxl-mapper is better for you, you can make the ring buffer variable a shared pointer -- this should save some memory allocation/usage cost.

Observation 2:
I am not using the librc-math library, so I copied the source files needed for ring buffer operations -- similar to what is done in voxl-mapper, but different from what is done in voxl-vision-hub.

Observation 3:
You may not need a ring buffer variable, but for more complex routines it might be of interest to store past VIO estimates. If you decide not to use the ring buffer variable, make sure to add the proper mutex to avoid race conditions between the VIO pipe client thread and the command handler thread when dealing with the current pose variable.

Future TODOs (it would be great to hear your insights):
- ADD CHECK IF DESIRED SETPOINT WAS ILLEGAL -- maybe elapsed time based?
- POTENTIALLY SEND STOP_FOLLOWING BEFORE INITIAL POINT -- in case you want to repeat the maneuver without resetting voxl-mapper
DISCLAIMER:
Fly offboard at your own discretion and responsibility!

Full Github repo:
https://github.com/zauberflote1/zbft-mapper-commander/tree/main
Please let me know your thoughts!

As always, happy to contribute,
--ZBFT
-
RE: IMX412 Autoexposure Compensation
@ctitus
You can set up a custom camera configuration:

#!/bin/bash
################################################################################
#
# This file allows setting of custom camera configurations by defining what
# sensors are plugged into which port.
#
# There is no guarantee that any arbitrary configuration will work. Only the
# predefined camera configs at https://docs.modalai.com/voxl2-camera-configs/
# are supported.
#
# each slot can be one of the following sensors:
#
# pmd-tof
# pmd-tof-liow2
# ov7251
# ov7251-combo
# ov9782
# ov9782-combo
# ar0144
# ar0144-fsin (for use with m0173 and Starling 2 only)
# ar0144-fsin-combo (for use with m0173 and Starling 2 only)
# imx214
# imx412
# imx412-fpv (low latency mode)
# imx678
#
# When using a combo mode pair, you must also add a JX_COMBO_MODE field which
# specifies if the pair is to be set up as either a stereo pair ordered
# left-right, right-left, or treated as two independent cameras. If you are
# using a combo-mode flex but only physically connect one of the two cameras
# then select "single"; it does not matter which one you connect.
#
# For example:
# J6_LOWER_COMBO_MODE="left-right"
# J6_LOWER_COMBO_MODE="right-left"
# J6_LOWER_COMBO_MODE="independent"
# J6_LOWER_COMBO_MODE="single"
#
# when running "independent" combo mode, also add a NAME2 field for the second cam
# e.g. J6_LOWER_NAME2="tracking_rear"
#
# for all sensors except TOF, you can specify a "rotate" flag to
# rotate the image 180 degrees e.g. J6_LOWER_ROTATE="true" or "false"
# for stereo pairs (e.g. ov7251 combo or ar0144-slave-stereo) you can specify
# "rotate-first-only" or "rotate-second-only" to only rotate one, otherwise
# when setting the rotate option to "true" both will be rotated.
#
# Once configured, this file should live in /data/modalai/custom_camera_config.txt
# then run voxl-configure-cameras custom to load it in
#
# cp /usr/share/modalai/voxl-camera-server/custom_camera_config.txt /data/modalai/
#
# We also suggest changing your /data/modalai/sku.txt file to have a camera
# config term "CC" such as MRB-D0005-4-V2-CC
################################################################################

J6_LOWER_SENSOR=""
J6_LOWER_NAME=""
J6_LOWER_ROTATE="false"

J6_UPPER_SENSOR=""
J6_UPPER_NAME=""
J6_UPPER_ROTATE="false"

J7_LOWER_SENSOR=""
J7_LOWER_NAME=""
J7_LOWER_ROTATE="false"

J7_UPPER_SENSOR=""
J7_UPPER_NAME=""
J7_UPPER_ROTATE="false"

J8_LOWER_SENSOR=""
J8_LOWER_NAME=""
J8_LOWER_ROTATE="false"

J8_UPPER_SENSOR=""
J8_UPPER_NAME=""
J8_UPPER_ROTATE="false"
-
RE: IMX412 Autoexposure Compensation
That's interesting. I think the ModalAI exposure package might not be directly compatible with the hires camera then -- I have to test it myself, although I have the IMX214, not the 412. Additionally, I am pretty sure I was able to use lme_msv before with the hires coupled with less aggressive parameters, but I don't recall whether I made modifications to the source code -- it's been 5 months since I came across this.
I'll take a closer look in my modified ModalAI packages directory and will upload any necessary patches for the packages related to the latest SDK release.

If you can, please upload your full .conf and sample images in the meantime.
-
RE: IMX412 Autoexposure Compensation
Hello,
It's been a while since I posted here, but I ran into similar issues before.

All you need to do is change the ae_mode field to either off or lme_msv -- I think lme_msv works with hires natively, but double-check it using the latest voxl-camera-server. Also note that most IMX hires sensors are rolling shutter, so forcing drastic parameters will make the line delay effect more apparent.
Pro tip: reducing the picture dimensions will reduce the computational effort of lme_msv.
Below are the fields you should add for lme_msv
"ae_mode": "lme_msv", "ae_desired_msv": 60, ##CHANGE "exposure_min_us": 0, ##CHANGE "exposure_max_us": 1000, ##CHANGE "gain_min": 54, ##CHANGE "gain_max": 8000, ##CHANGE "exposure_soft_min_us": 50, ##CHANGE "ae_filter_alpha": 0.600000023841858, ##CHANGE "ae_ignore_fraction": 0.20000000298023224, ##CHANGE "ae_slope": 0.05000000074505806, ##CHANGE "ae_exposure_period": 1, "ae_gain_period": 1
All the best,
ZBFT
-
Math Behind Fusion of FIXED POSE (AprilTag Pose)
Hello,
I am curious about the math behind the "fusion" of the fixed pose coming from the AprilTags. From what I understood reading pose_filter.c and geometry.c, the filter uses a ring buffer and takes the average of the buffered values as the new corrected values of x, y, z, and yaw. Could anyone confirm this?
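If that reading is right, the scheme would look roughly like the sketch below (my own illustration, not the actual pose_filter.c code); note that yaw has to be averaged via sin/cos to survive the +/- pi wraparound:

#include <cmath>
#include <vector>

// average a buffer of (x, y, z, yaw) samples -- illustrative only
struct PoseSample { double x, y, z, yaw; };

PoseSample average_poses(const std::vector<PoseSample>& buf) {
    PoseSample avg{0.0, 0.0, 0.0, 0.0};
    if (buf.empty()) return avg;
    double s = 0.0, c = 0.0;
    for (const auto& p : buf) {
        avg.x += p.x; avg.y += p.y; avg.z += p.z;
        s += std::sin(p.yaw); c += std::cos(p.yaw); // wrap-safe yaw accumulation
    }
    const double n = (double)buf.size();
    avg.x /= n; avg.y /= n; avg.z /= n;
    avg.yaw = std::atan2(s, c); // circular mean of yaw
    return avg;
}
-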
RE: voxl-open-vins-server How to Use, Overall Questions, ROS/ROS2 Findings
@zauberflote1
Forgot to mention that another way to reduce jitter is to check whether the data-ready interrupt flag is set before reading, avoiding reading stale info from the registers.
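In sketch form (register and helper names are hypothetical, not from a specific IMU datasheet or the voxl-imu-server source):

// poll the data-ready bit before touching the sample registers
uint8_t status = 0;
read_register(REG_INT_STATUS, &status); // assumed SPI/I2C read helper
if (status & DATA_RDY_BIT) {
    read_sample_registers(&accel, &gyro); // only read when the sample is fresh
}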
Sorry for the many messages
-
RE: voxl-open-vins-server How to Use, Overall Questions, ROS/ROS2 Findings
@Cliff-Wong
Image Description:
1) BasicRead Server Using MPA-ROS
2) Classic Server and MPA-ROS
3) IMU ROS ~500Hz
4) IMU ROS ~200Hz
-
RE: voxl-open-vins-server How to Use, Overall Questions, ROS/ROS2 Findings
@Cliff-Wong Hey Cliff,
Thanks for your reply! I have solved the issue. It's a bit of a long solve, but I'll outline the main steps.
For proper IMU-Cam calibration, we cannot have a burst of measurements like we get when running the IMU in FIFO mode. Therefore, the best way to get around this is to use the IMU in basic read mode. I was able to achieve good VINS results running the IMU at both 500Hz and 200Hz. I opted to remodel the IMU server to have it "fully" in ROS without the need for MPA, while maintaining the FIFO scheduler for "RT". This decision reduced the sampling jitter by a lot (about 50%). Still, some jitter is present: looking at the core load distribution, I noticed that sometimes all the processes get allocated to core 7 until the load bounces back to a more favorable core distribution, creating a tiny lag in the system and, consequently, causing the jitter -- of course, this is a hypothesis. I was wondering if you have faced the same situation and if you or your teammates have thought about restricting specific cores to real-time operations, which would avoid this weird allocation.
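For reference, keeping the read thread on the FIFO scheduler looks roughly like this (the priority value is an assumption; tune it for your system):

#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include <cstring>

struct sched_param param;
param.sched_priority = 80; // assumed RT priority -- pick what fits your system
int ret = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
if (ret != 0) {
    fprintf(stderr, "pthread_setschedparam failed: %s\n", strerror(ret)); // usually needs root / CAP_SYS_NICE
}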
Here are the results:
Best,
--ZBFT