@tom Sorry about that, I thought I had answered this.
Would you be able to tell me what the default state is? I really can't recall if they've been moved since we got it.
Hi,
We wanted to start using the Ethernet expansion board for the SD card slot and the additional USB ports, but I noticed there is nothing written about it in the user guide (link). In particular, we're curious how the switches on the SW1 component should be set, among other uncertainties that may come up once we start using the product.
@Alex-Kushleyev Sorry for the delay, didn't notice the alert. But great, thanks!
Hi,
I see here that the voxl 4-in-1 mini ESC is rated for 2-4S (i.e. up to 16.8 V at max charge). Maybe this is a stupid question, but is it possible to run LiHV 4S batteries with this ESC? The max voltage would then be 4.35 * 4 = 17.4 V.
I realize this is above the rated spec, but I was curious whether the difference (roughly 3.6% above the rated maximum) is small enough to still be within tolerable margins?
Thanks
@Alex-Kushleyev Thanks for getting back to me. I must've missed reading the part regarding the CPU frequencies; now that you say it, it seems obvious.
the IMU frequency is not exactly 1 kHz because the IMU is running on its own internal oscillator, which does not perfectly align with a 1000 Hz output frequency, so the closest frequency the IMU can achieve is about 976 Hz or so.
This is interesting actually, because the sampling rate (computed from the dt) seems to be consistently slightly higher than 1 kHz, approximately 1.03 kHz. Can this make sense for the IMU?
Finally, you are absolutely right that the ROS publisher queue size being set to 1 is causing messages to be dropped: they are published too quickly and are simply discarded when the queue overflows.
I really don't see the downside of increasing the buffer size, and I agree it should be set to at least sample_rate / fifo_poll_rate, which is the number of samples that are published at a time.
I will discuss this with the team. Thank you for pointing out the issue, and good job figuring it out!
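(As an aside, that sizing rule boils down to something like the sketch below. The function and variable names are only illustrative, mirroring the imu0_sample_rate_hz and imu0_fifo_poll_rate_hz values from the config, not the actual voxl-mpa-to-ros code.)

// Illustrative only: derive a safe ROS publisher queue size from the IMU config.
#include <algorithm>
#include <cstdint>

static uint32_t imuQueueSize(uint32_t sample_rate_hz, uint32_t fifo_poll_rate_hz)
{
	// Each FIFO poll hands over roughly sample_rate / poll_rate samples at once,
	// so the publisher queue must hold at least that many; add some headroom.
	const uint32_t batch = sample_rate_hz / fifo_poll_rate_hz; // e.g. 1000 / 100 = 10
	return std::max(2u * batch, batch + 5u);
}

// e.g. m_rosPublisher = m_rosNodeHandle.advertise<sensor_msgs::Imu>(topicName, imuQueueSize(1000, 100));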
I didn't see a major change in CPU consumption from the queue size change; of the two solutions, increasing the FIFO poll rate cost much more CPU than changing the queue size did. I would offer to make a PR if the change were more complicated than just changing a number.
Anyway thanks. I'm a bit curious about the IMU, but otherwise I think this is solved.
Morten
@Alex-Kushleyev Hi again, Sorry to keep spamming. I just wanted to make sure this is up-to-date as I progress.
A bit silly that I didn't realize this before, but I think the problem lies in the FIFO buffer reading vs the IMU queue size in voxl-mpa-to-ros; that being said, the conversion from pipe to ROS also seems to cost some latency.
In voxl-mpa-to-ros, there is the following:
m_rosPublisher = m_rosNodeHandle.advertise<sensor_msgs::Imu>(topicName, 1);
Given that the FIFO buffer (with default settings) delivers 10 IMU messages at a time (1000 Hz sample rate / 100 Hz FIFO poll rate), I think this should be changed to
m_rosPublisher = m_rosNodeHandle.advertise<sensor_msgs::Imu>(topicName, 25);
so it has a little extra headroom depending on what happens. This results in a very large improvement:
Getting data from log0004 and data_incr_queue.bag
voxl-logger
mean: 0.9766417070988479 ms
std dev: 0.0014018018261035131 ms
total messages: 61363
rosbag record
mean: 0.9766411567432235 ms
std dev: 0.02937541759378401 ms
total messages: 59305
although you can see there are still some dropped messages and still a worsening of the standard deviation. I realized the IMU readings can also be published directly to ROS from voxl-imu-server, and this seems to give a bit more benefit as well:
rosbag record
mean: 0.9768075697417741 ms
std dev: 0.012558722853297727 ms
total messages: 61147
still not totally without message dropping. This also still shows some worsening of the standard deviation; I wonder if that's simply due to the conversion in _clock_monotonic_to_ros_time.
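For context on what I mean, such a conversion typically estimates the offset between CLOCK_MONOTONIC and wall-clock (ROS) time and adds it to each sample timestamp. The sketch below is only my assumption about the general shape of such a helper, not the actual _clock_monotonic_to_ros_time implementation:

// Sketch only: one common way to map a CLOCK_MONOTONIC timestamp (ns) to ros::Time.
#include <ctime>
#include <cstdint>
#include <ros/ros.h>

static ros::Time monotonicToRosTime(int64_t monotonic_ns)
{
	// Sample both clocks back-to-back to estimate their offset.
	struct timespec mono;
	clock_gettime(CLOCK_MONOTONIC, &mono);
	const int64_t mono_now_ns = int64_t(mono.tv_sec) * 1000000000LL + mono.tv_nsec;
	const ros::Time wall_now = ros::Time::now();

	// Apply the offset to the sample's monotonic timestamp. If the offset is
	// re-estimated per sample like this, scheduling jitter between the two
	// clock reads leaks into the converted timestamps, which could explain a
	// larger standard deviation.
	const int64_t offset_ns = (int64_t(wall_now.sec) * 1000000000LL + wall_now.nsec) - mono_now_ns;
	const int64_t out_ns = monotonic_ns + offset_ns;
	return ros::Time(uint32_t(out_ns / 1000000000LL), uint32_t(out_ns % 1000000000LL));
}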
@Morten-Nissov Note, although this does make things look better, there is still a pretty significant, non-negligible performance difference between the voxl-logger and the rosbag recording.
Also, I'm not really sure why the voxl-logger doesn't manage a clean 1 kHz given the configuration; it seems to be consistently off the commanded value.
@Alex-Kushleyev Follow-up: I can change the fifo_poll_rate_hz parameter, and this significantly increases the performance at the cost of a significantly increased CPU load on the slow cores. Maybe you can advise how we should proceed.
"imu0_enable": true,
"imu0_sample_rate_hz": 1000,
"imu0_lp_cutoff_freq_hz": 92,
"imu0_rotate_common_frame": true,
"imu0_fifo_poll_rate_hz": 500,
Name Freq (MHz) Temp (C) Util (%)
-----------------------------------
cpu0 1804.8 41.7 44.95
cpu1 1804.8 41.7 33.94
cpu2 1804.8 42.0 46.88
cpu3 1804.8 42.4 44.00
cpu4 2419.2 40.5 0.00
cpu5 2419.2 40.5 0.00
cpu6 2419.2 39.7 0.00
cpu7 2841.6 40.1 0.00
Total 42.4 21.22
10s avg 15.10
-----------------------------------
GPU 305.0 38.6 0.00
GPU 10s avg 0.00
-----------------------------------
memory temp: 39.0 C
memory used: 746/7671 MB
-----------------------------------
Flags
CPU freq scaling mode: performance
Standby Not Active
-----------------------------------
Note even with this change some IMU messages are still dropped:
Getting data from log0003 and data_perf_fifo500.bag
voxl-logger
mean: 0.9766390443514245 ms
std dev: 0.001688066415815054 ms
total messages: 61397
rosbag record
mean: 1.0005186426507142 ms
std dev: 0.15095098578685434 ms
total messages: 59599
@Alex-Kushleyev Sorry for the delay, some data for you in different configurations:
Getting data from log0000 and data.bag
voxl-logger
mean: 0.9766542020947352 ms
std dev: 0.0013052458028126252 ms
total messages: 61393
rosbag record
mean: 3.230353079590143 ms
std dev: 2.992801413066607 ms
total messages: 18361
----------
Getting data from log0001 and data_nodelay.bag
voxl-logger
mean: 0.9766513801209619 ms
std dev: 0.0013052148029740315 ms
total messages: 61342
rosbag record
mean: 3.2565453276935123 ms
std dev: 3.0139953610144303 ms
total messages: 17860
----------
Getting data from log0002 and data_perf.bag
voxl-logger
mean: 0.976641095612789 ms
std dev: 0.001267007877929727 ms
total messages: 61384
rosbag record
mean: 3.73874577006071 ms
std dev: 3.513557787123189 ms
total messages: 15819
----------
Note the significant difference in message counts for each run, indicating many messages dropped by the ROS node.
The numbers above are the sampling interval mean and standard deviation for the different configurations. Logging was done with voxl-logger -t 60 -i imu_apps -d bags/imu_test/ and rosbag record /imu_apps -O data --duration 60. I tried setting the rosbag to record with --tcpnodelay, but this seems to have no effect. During the performance run, the CPU usage was as follows:
Name Freq (MHz) Temp (C) Util (%)
-----------------------------------
cpu0 1804.8 40.1 22.33
cpu1 1804.8 39.7 18.10
cpu2 1804.8 40.1 17.82
cpu3 1804.8 39.7 16.16
cpu4 2419.2 39.0 0.00
cpu5 2419.2 38.6 0.00
cpu6 2419.2 37.8 0.00
cpu7 2841.6 39.0 0.00
Total 40.1 9.30
10s avg 12.26
-----------------------------------
GPU 305.0 37.4 0.00
GPU 10s avg 0.00
-----------------------------------
memory temp: 37.8 C
memory used: 734/7671 MB
-----------------------------------
Flags
CPU freq scaling mode: performance
The only thing that seems out of the ordinary here is that several CPUs are consistently operating at a low frequency. Maybe you have a comment on why that could be? Otherwise, it seems the transfer from voxl-imu-server to ROS causes quite a lot of message loss. Here is some voxl 2 mini install information:
--------------------------------------------------------------
system-image: 1.7.8-M0104-14.1a-perf
kernel: #1 SMP PREEMPT Sat May 18 03:34:36 UTC 2024 4.15
--------------------------------------------------------------
hw platform: M0104
mach.var: 2.0.0
--------------------------------------------------------------
voxl-suite: 1.3.3
--------------------------------------------------------------
with this IMU config:
"imu0_enable": true,
"imu0_sample_rate_hz": 1000,
"imu0_lp_cutoff_freq_hz": 92,
"imu0_rotate_common_frame": true,
"imu0_fifo_poll_rate_hz": 100,
"aux_imu1_enable": false,
"aux_imu1_bus": 1,
"aux_imu1_sample_rate_hz": 1000,
"aux_imu1_lp_cutoff_freq_hz": 92,
"aux_imu1_fifo_poll_rate_hz": 100,
"aux_imu2_enable": false,
"aux_imu2_spi_bus": 14,
"aux_imu2_sample_rate_hz": 1000,
"aux_imu2_lp_cutoff_freq_hz": 92,
"aux_imu2_fifo_poll_rate_hz": 100,
"aux_imu3_enable": false,
"aux_imu3_spi_bus": 5,
"aux_imu3_sample_rate_hz": 1000,
"aux_imu3_lp_cutoff_freq_hz": 92,
"aux_imu3_fifo_poll_rate_hz": 100
@Alex-Kushleyev Thanks! I'll try this. Note I should have mentioned this is on a voxl2 mini computer, and on the ROS side the only thing happening is recording to a rosbag.
Morten
Hi,
We've lately been noticing a lot of inconsistency in the publishing frequency and sample rate (from the timestamps) of the IMU when publishing to ROS with the voxl-mpa-to-ros tool using SDK 1.1.3. This shows similar symptoms to what other people have reported (like this), but doesn't seem to be fixed by setting the CPU to performance mode. Using voxl-inspect-imu shows a dt of about 1 ms, corresponding well to 1 kHz (with the default IMU config).
We diagnose this problem primarily using rostopic hz, as well as with the following plot of IMU sampling rate that we noticed while doing VIO calibrations. We also tried setting the sampling rate to 200 Hz instead, to see if this would alleviate some pressure, but there is still quite a lot of inconsistency (worse with 1 kHz, of course).
This has been tested with very few other services running in the background, so compute usage when this occurs is minimal. The only thing running with any noticeable compute usage is voxl-px4.
I was curious if this is expected behavior or if you had any suggestions for how to troubleshoot/debug/solve this problem?
@Alex-Kushleyev Do you think it is possible to reuse params between drivers? E.g. those which shouldn't differ between the two, like baud, rpm min, rpm max, etc.?
I was tempted to implement the extra serial read in the voxl_esc driver for this reason, to avoid defining the same parameters twice, but this is probably more complicated overall.
Also, the mini ESC does not have the jumper to modify the IDs, sorry!
Ah unlucky, but thanks for checking.
@Alex-Kushleyev Sorry for not responding, and thanks for getting back to me.
I was curious whether this jumper functionality was available on the mini? I'm guessing not?
Otherwise we'll proceed with your recommendations, it doesn't seem so challenging to implement these changes.
Edit: Just a quick question about a bug we found early on. I've made a copy of the driver and am trying to build it, but I am running into the following error message:
Duplicate parameter definition: VOXL_ESC_FUNC1+
I have removed entries from the params.c to make sure things are not doubly defined, but this one is not even defined there:
// The following are auto generated params from control allocator pattern, put here for reference
// Default ESC1 to motor2
//PARAM_DEFINE_INT32(VOXL_ESC_FUNC1, 102);
//PARAM_DEFINE_INT32(VOXL_ESC_FUNC2, 103);
//PARAM_DEFINE_INT32(VOXL_ESC_FUNC3, 101);
//PARAM_DEFINE_INT32(VOXL_ESC_FUNC4, 104);
I've looked for other references to VOXL_ESC_FUNC1, but I am not finding the right place. Would you have any idea why this is now getting doubly defined?
@Alex-Kushleyev Understood.
We were thinking of the mini ESC: https://www.modalai.com/products/voxl-esc-mini?variant=47206467371312. But that's not necessarily a hard requirement; is it written somewhere which ones have this jumper? Sorry if I missed it.
We're not opposed to doing some of this ourselves either, if it's not a feature you were planning on adding. Before seeing this message we were brainstorming about re-purposing the GNSS UART for this instead, and other such more "hacky" solutions.
Morten
@Alex-Kushleyev Hi Alex,
Just as a follow-up, you say the board supports two 4-in-1 ESCs on the same UART port. How would you then distinguish between sending commands to ESC 1 vs ESC 2 when both are connected to the same port?
E.g., I figure it would amount to some version of adding indexes 4-7 to
cmd.len = qc_esc_create_rpm_packet4_fb(_esc_chans[0].rate_req,
_esc_chans[1].rate_req,
_esc_chans[2].rate_req,
_esc_chans[3].rate_req,
_esc_chans[0].led,
_esc_chans[1].led,
_esc_chans[2].led,
_esc_chans[3].led,
_fb_idx,
cmd.buf,
sizeof(cmd.buf),
_extended_rpm);
but I am not quite sure how ESC1 knows it is 0->3 and ESC2 knows it is 4->7.
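Concretely, the naive extension I'm imagining is just a second packet built from channels 4-7, something like the sketch below; how the second ESC on the shared UART would know to consume those channels is exactly the open question:

// Sketch of my guess only: build a second RPM packet for a hypothetical second
// 4-in-1 ESC from channels 4-7, mirroring the existing call for channels 0-3.
// `Command cmd2` stands in for whatever type `cmd` has in the driver; nothing
// here is confirmed against the actual voxl_esc protocol.
Command cmd2;
cmd2.len = qc_esc_create_rpm_packet4_fb(_esc_chans[4].rate_req,
                                        _esc_chans[5].rate_req,
                                        _esc_chans[6].rate_req,
                                        _esc_chans[7].rate_req,
                                        _esc_chans[4].led,
                                        _esc_chans[5].led,
                                        _esc_chans[6].led,
                                        _esc_chans[7].led,
                                        _fb_idx,
                                        cmd2.buf,
                                        sizeof(cmd2.buf),
                                        _extended_rpm);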
Hi
I was curious if it is possible to build the PX4 tests in the docker environment, e.g. the Google Test unit tests in the Actuator Effectiveness library? Sorry if it's obvious, I couldn't quite see how to add this to the existing build scripts.
Edit 2: Tried creating a class for handling only the multiplexer:
#define NUMBER_OF_TMAG5273 1

// Thin wrapper around the PCA9546 I2C multiplexer. Its single control register
// is a channel bitmask, so enabling channel i means writing (1 << i).
class PCA9546 : public device::I2C
{
public:
	PCA9546(const I2CSPIDriverConfig& config)
		: I2C(DRV_MAG_DEVTYPE_PCA9546, "pca9546", config.bus, ADA_PCA9546::I2C_ADDRESS_DEFAULT, I2C_SPEED)
	{
	}

	// Enable only the mux channel for sensor index i.
	bool select(uint8_t i)
	{
		if (i >= NUMBER_OF_TMAG5273)
		{
			PX4_ERR("select for index %i >= NUMBER_OF_TMAG5273 (%i)", i, NUMBER_OF_TMAG5273);
			return false;
		}

		// The control register is a bitmask, not an index: bit i enables channel i.
		uint8_t channel_mask = static_cast<uint8_t>(1u << i);
		return transfer(&channel_mask, 1, nullptr, 0) == PX4_OK;
	}

private:
	int probe() override
	{
		return PX4_OK; // the mux has no ID register to verify
	}
};
I include this as a member variable in my main driver class, which is meant to read from the hall-effect sensors through the mux. This is proving to be a bit difficult, as the probe doesn't seem to ever succeed (the same probe that worked with a single sensor, but now with a call to select(0) first to test).
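For reference, this is roughly how I intend the main driver to use the mux member before each sensor transaction; MyHallDriver, _mux and readSensor are only placeholder names for the sketch, not the actual code:

// Sketch only: select the mux channel for sensor `idx`, then do a normal
// register read on the TMAG5273 that is now visible on the bus. Assumes the
// driver itself also derives from device::I2C so transfer() is available.
bool MyHallDriver::readSensor(uint8_t idx, uint8_t *buffer, uint8_t bytes)
{
	if (!_mux.select(idx))
	{
		return false;
	}

	// With only this channel enabled, the sensor at 0x35 behind it is the
	// only device answering at that address, so the usual exchange works.
	const uint8_t start_reg = 0x00; // illustrative start register
	return transfer(&start_reg, 1, buffer, bytes) == PX4_OK;
}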
Edit3: The mux driver can read a single sensor (sending the select byte to the mux address first, followed by the standard read exchange), so it would seem the difficulty comes from the part that interfaces with the mux.
Thanks to both.
We actually have a multiplexer (PCA9546) and 4 magnetometers working on a Teensy. So all should be good; it's just a matter of porting this to PX4.
@Vinny This sounds interesting because it is essentially what we would like to do. The challenge for us is that we haven't found a way to send I2C messages to the multiplexer address (0x70) as well as the sensor address (0x35) from a single PX4 driver class. Did you manage to do something similar?
I've been trying to create a class for the multiplexer and have the magnetometer as a member variable, but it doesn't seem like this will work particularly well given the standard boilerplate for PX4 drivers, at least from what I can see. I assume it's the same for you, that the multiplexer just needs a single byte to select which sensor is enabled; if you've found an easy way to send this byte from an I2C driver with a different address, I would be interested in hearing how to do that.
Edit: I mean this in particular for bus 1 on header J19. I believe this bus is on the DSP, and I think this is relevant for how I do I2C at a low level, right? E.g. on Linux, using ioctl and such to write a byte. I mention this because I tried to implement this with ioctl and the docker container returns
fatal error: 'linux/i2c.h' file not found
#include <linux/i2c.h>
so I figured this was maybe not 100% correct.
Hi,
In connection with another post (link) we're trying to put together some custom drivers for sensors we'd like to use with the voxl2 mini. Previously we got a driver working for a hall-effect sensor we were interested in using, but in reality we want to use 4 of them.
All 4 have the same I2C address (which could be changed given more IO), so we're interfacing them through an I2C multiplexer. I wanted to try to reuse the previously mentioned single-sensor driver, but it seems not so straightforward. For example, calling module_start for the multiplexer driver becomes a bit complicated if it also needs to start the underlying hall-effect drivers. Also, probing the sensor drivers at start-up (a function used by the I2C class) is not really possible, given that they will be "hidden" behind the mux.
I was curious if you knew of any examples of something similar, or have any advice on how to structure this? Hope it makes sense.
@Alex-Kushleyev Sorry, I didn't see the message, but I ended up in the same place. I implemented a read of multiple registers, which seems to work:
// Read `bytes` consecutive registers starting at `reg` in a single I2C
// transaction: write the start register address, then read `bytes` bytes back.
void TMAG5273::RegisterReadMultiple(Register reg, uint8_t* buffer, uint8_t bytes)
{
	const uint8_t cmd = static_cast<uint8_t>(reg);
	transfer(&cmd, 1, buffer, bytes);
}
This turned out to be significantly faster (~4x I think), so even without 1MHz I think it could be sufficient for us. Thanks for the help!
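As a usage note, the speed-up comes from issuing one write of the start register followed by one multi-byte read, instead of a separate write/read pair per register. A sketch of how I use it; the start register name is an assumption about the driver's Register enum and the TMAG5273 result register ordering, so double-check it against the datasheet:

// Sketch only: fetch the six X/Y/Z result bytes in one I2C transaction.
// Register::X_MSB_RESULT is assumed to be the first of six consecutive result
// registers (X MSB/LSB, Y MSB/LSB, Z MSB/LSB); verify against the actual enum.
uint8_t raw[6] {};
RegisterReadMultiple(Register::X_MSB_RESULT, raw, sizeof(raw));

const int16_t x = int16_t((raw[0] << 8) | raw[1]);
const int16_t y = int16_t((raw[2] << 8) | raw[3]);
const int16_t z = int16_t((raw[4] << 8) | raw[5]);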