Posts made by jaredjohansen
-
VOXL2 thermal management
We are incorporating the VOXL2 into an enclosed payload. Are there specific components that are the main sources of heat and that we should take extra precautions to dissipate heat from?
-
RE: Setting up micrortps bridge on VOXL2/Sentinel
Thanks, @modaltb! What you've pointed me to is very helpful! I'll look into it!
-
RE: Setting up micrortps bridge on VOXL2/Sentinel
@James-Strawson, I saw that you worked on the original voxl-px4-shell.
Do you know if there is a way to access the PX4's NuttX nsh> shell on the VOXL2? (For ROS2 to communicate with the PX4 on the VOXL2 through the micrortps_bridge, it is my understanding that I need to start the micrortps_client via the PX4 NuttX shell.)
-
RE: Setting up micrortps bridge on VOXL2/Sentinel
Awesome. That is super helpful! I was able to follow those instructions and access the pxh> shell.
According to those instructions, if I have access to the NuttShell/System Console (the nsh> shell, I believe), I should be able to start the micrortps_client running.
Is there a way to access the nsh> shell on the VOXL2?
(For others, I'll note that the nsh> shell is available on the VOXL1, which is nice for simulating things like the GPS signal being lost, as described here.)
-
RE: Setting up micrortps bridge on VOXL2/Sentinel
Got it. Thanks. One more follow-up question.
The VOXL1 had a tool called voxl-px4-shell that gave you access to the PX4 console. I don't see something like that for the VOXL2. Will that be a future capability? If so, do you have an estimate of when it'd be available? Thanks!
-
VOXL coordinate frame NED/FRD -> ENU/FLU
The VOXL's coordinate frame is NED/FRD (as described in this video). This is in line with typical aerospace applications.
ROS and most of the robotics world (see bottom of page 4) use the ENU/FLU coordinate frame.
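The mapping between the two conventions is simple; here is a minimal numpy sketch (the function names are my own, not an existing API):

import numpy as np

def ned_to_enu(p_ned):
    """Map a world-frame point from NED to ENU:
    swap the first two axes and negate the third
    (x_enu = y_ned, y_enu = x_ned, z_enu = -z_ned)."""
    n, e, d = p_ned
    return np.array([e, n, -d])

def frd_to_flu(v_frd):
    """Map a body-frame vector from FRD to FLU:
    x is unchanged, y and z are negated."""
    x, y, z = v_frd
    return np.array([x, -y, -z])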
I would like to use the voxl-qvio output as odom data. The output of this service is in the NED/FRD coordinate frame. Is there some way to have QVIO output the drone's coordinates in the ENU/FLU coordinate frame instead?
-
RE: Setting up micrortps bridge on VOXL2/Sentinel
Oops. Sorry, @Eric-Katzfey! I meant to say @Alex-Kushleyev.
@Alex-Kushleyev, any help/instructions/pointers you can give?
-
Setting up micrortps bridge on VOXL2/Sentinel
I'm trying to set up a micrortps_bridge (https://docs.px4.io/main/en/ros/ros2_comm.html) on a VOXL2/Sentinel.
On one side of the micrortps_bridge, I have a Docker container. This Docker container has Ubuntu 20.04, ROS2 Foxy, a number of supporting libraries (e.g., gradle, Fast-DDS), and some custom ROS2 SW. From within the Docker container, I am able to run the micrortps_agent and the ROS2 SW. It appears that everything on this side of the micrortps_bridge is working.
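For context, the generic PX4 workflow in the docs linked above boils down to two commands, shown here with the documented defaults (the client half is the part I can't figure out on the VOXL2):

micrortps_agent -t UDP          # ROS2 side: run the agent (this works in my Docker container)
micrortps_client start -t UDP   # PX4 side: start the client from the PX4 shell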
On the other side of the micrortps_bridge is the micrortps_client. I am uncertain how to properly set this up on the VOXL2. I found this post (which attempts to set up the bridge on the VOXL1 but had some issues) and this post (which sets up ROS1 on the VOXL2). Neither is quite what I am looking for.
I am wondering if there is any guidance on how to set up a micrortps_bridge on the VOXL2. @Eric-Katzfey, you were involved in the post from a year ago with @ryan_meagher and @benjamin-linne that attempted to get the micrortps_bridge working on the VOXL1. Is there any help/instructions/pointers you can give me?
-
voxl2-mpa-to-ros2
I know that you can install ROS2 on the VOXL2.
I'm familiar with the voxl-mpa-to-ros command on the VOXL1 (and how it calls a roslaunch file that starts the voxl_mpa_to_ros_node).
Are there plans to create the equivalent of a voxl2-mpa-to-ros2 that publishes all of the VOXL2 Flight Deck sensor data as ROS2 topics? (I hope so!) This would be really handy for anyone using the VOXL2, RB5, or Sentinel to develop as part of a ROS2-enabled system.
-
VOXL2 TOF Support
I've looked over the VOXL2 documentation and found that there are 3 camera groups and various camera configs.
I also read that the TOF sensor is not yet supported. Do you have an approximate timeline for when the TOF sensor would be supported?
When the TOF sensor is supported, would we be able to plug in 2-3x TOF sensors simultaneously?
-
Can RB5 drone be set up as hotspot?
Can the RB5 drone be set up as a hotspot?
For example, suppose the drone is connected to the internet via its 5G capability. Could cell phones or laptops connect to the RB5's wifi and access the internet via the RB5 acting as a hotspot?
-
TOF Sensor Range
The datasheet for the VOXL TOF sensor states that its range is 4-6m. To get the furthest range, you need to reduce the frame rate to the minimum (i.e., 5 fps, I believe). In practice, if I reduce the frame rate to something quite low (e.g., 10 fps) and stand in front of the sensor, I begin to fade out around 2.5m. By 3m, I am lost almost completely to noise.
I'd like to use the TOF sensor for mapping an indoor environment. However, with a range of ~3m, when the drone is in the middle of a room, it is unable to see the walls. In that situation, many SLAM algorithms don't work well since the pointcloud data is "empty" and they are unable to do scan matching from frame to frame. Drift and other undesirable behavior tend to occur.
For the package size, the VOXL TOF sensor is incredible. It's just not well suited for the particular application I am interested in. I am wondering if ModalAI knows of a product that would be comparable to the VOXL TOF sensor, but with an increased range (on the order of 12m).
I have looked for similar products online, but have not found anything meeting the desiderata of low SWaP and ~12m range:
- AD-96TOF1-EBZ (<6m range)
- IMX556PLR (<6m range)
- phototonics time-of-flight-sensors (4m range)
- 3d-tof-sensor-module (4m range)
- terabee-3dcam (4m range)
- helios2 (8m range, but 400g)
- AD-FXTOF1-EBZ (3m range)
- AMS time-of-flight (0.5m range)
One product that seems semi-tailored to my problem is an rplidar like this one. It has a 12m range, 360 degree FOV, is fairly inexpensive ($320)...but weighs a decent amount (190 grams) and is 2D instead of 3D.
I'd like to ask two questions:
(1) Do you know of a product that would be comparable to the VOXL TOF sensor, but with an increased range (on the order of 12m)?
(2) Have you looked at designing/integrating any products that would be well suited for the use-case I described (e.g., rplidar)?
-
RE: TOF SLAM
@Alex-Gardner, I was wondering if you have a timeline for when your current SLAM work would be released (e.g., working demo code). Thanks!
-
Timeline for voxl-like services
Will certain voxl services be added to the RB5 SW base? If so, do you have an approximate timeline for when they would be released?
I'm particularly interested in docker-daemon, voxl-inspect-*, voxl-portal, and voxl-tflite-server.
Thanks!
-
RE: TOF Depth Image Encoding changed from 32FC1 to mono8
For anyone who runs into this same issue, I changed my subscriber to listen to /map/tof_pc (which has the PointCloud2 type). My code (to extract the z value) is this:
# get the depth image as a numpy array of 32-bit floats
# (np.frombuffer replaces the deprecated np.fromstring)
dep_frame = np.frombuffer(data.data, dtype=np.single)
# reshape to (rows, cols, xyz) and keep only the z channel
dep_frame = dep_frame.reshape(172, 224, 3)[:, :, 2]
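For completeness, here is the minimal subscriber I wrap around that snippet (the node name is arbitrary, and it assumes the cloud is packed as three floats per point with no padding, as above):

import numpy as np
import rospy
from sensor_msgs.msg import PointCloud2

def callback(msg):
    # interpret the raw buffer as 32-bit floats and keep the z channel
    cloud = np.frombuffer(msg.data, dtype=np.single)
    z = cloud.reshape(172, 224, 3)[:, :, 2]
    valid = z[z > 0]  # drop pixels with no return
    if valid.size:
        rospy.loginfo("closest z: %.3f m", float(valid.min()))

rospy.init_node("tof_pc_listener")  # arbitrary node name
rospy.Subscriber("/map/tof_pc", PointCloud2, callback)
rospy.spin()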
-
TOF Depth Image Encoding changed from 32FC1 to mono8
I have a TOF sensor.
A little while ago, to see the depth image, I would rostopic echo /tof/depth. When I did that, the ROS message would be of the type sensor_msgs/Image and its encoding would be 32FC1.
Today, with the latest code, I need to use rostopic echo /mpa/tof_depth to get the depth image. When I do that, the ROS message is still of the type sensor_msgs/Image, but its encoding is mono8.
Historically, if I wanted to process the depth image in ROS, I would set up a subscriber and use code like that shown below to ingest the data as a depth image. I could then access individual pixels to get the depth (in meters).
I'm not sure how to process a depth image with the mono8 encoding. It seems like it would have too few bits to represent the resolution of the possible depths.
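To make that concern concrete, a back-of-the-envelope check (assuming the sensor's nominal ~6m max range):

# 32FC1 stores one 32-bit float per pixel (depth in meters at full precision).
# mono8 stores one 8-bit value per pixel, i.e., at most 256 distinct levels.
levels = 2 ** 8                      # 256
depth_range_m = 6.0                  # nominal max range of the TOF sensor
step_m = depth_range_m / (levels - 1)
print(f"{step_m:.4f} m per level")   # ~0.0235, i.e., ~2.4 cm granularity at best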
Is the mono8 encoding intentional (with the conversion to the latest mpa_to_ros)? Or is it a bug?
If it's intentional, can you help me understand how to process this new data encoding and extract the distances of individual pixels (similar to the code posted below)?
# get the depth image as 32-bit floats; copy() makes the array writable
# (np.frombuffer replaces the deprecated np.fromstring)
dep_frame = np.frombuffer(tof_depth.data, dtype=np.single).copy()
dep_frame = dep_frame.reshape(172, 224)
# get the closest value in the bounding box
object_depth_map = dep_frame[top:bottom, left:right]
object_depth_map[object_depth_map == 0] = 9999.00  # treat no-return pixels as far away
object_depth = np.min(object_depth_map)
-
RE: TOF sensor not working with latest VOXL SW
I figured out my issue.
On the VOXL, I had ROS_HOSTNAME=localhost. When I ran unset ROS_HOSTNAME, the companion PC was able to connect properly.
-
RE: TOF sensor not working with latest VOXL SW
@Matt-Turi, I checked whether any connections were loose. There were none. I re-seated all the connectors anyway, but it made no difference.
I ended up purchasing a new TOF sensor. After I installed it, it worked! I am able to see the TOF outputs in voxl-portal.
I have one last issue, which could be related to the VOXL SW, so I'm keeping it in this thread instead of starting another one. After confirming the TOF sensor was working properly in voxl-portal, I wanted to see the pointcloud in rviz.
I followed the steps described here to view the TOF sensor data in rviz. On a companion Linux PC, I would connect to the VOXL over wifi. I could successfully ping going both ways. I ran the appropriate export statements (as described in the TOF sensor User Guide).
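For reference, the export statements I run on the companion PC follow the standard ROS network setup (the addresses below are placeholders for my actual VOXL and PC addresses):

export ROS_MASTER_URI=http://<voxl-ip>:11311   # the ROS master runs on the VOXL
export ROS_IP=<companion-pc-ip>                # the address this PC advertises to the master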
But when I tried to run rviz or rostopic list, I would get the error message "ERROR: Unable to communicate with master!". I tried this with two companion Linux PCs (Ubuntu 18.04 with ROS Melodic and Ubuntu 20.04 with ROS Noetic) and got the same results on both.
It seems very much like a networking issue. In searching other ROS forums, many attribute this error to a firewall being enabled. I've disabled the firewall on both companion Linux PCs, but that didn't resolve the issue.
I'm scratching my head because it appears that everything is fine network-wise, but ROS is unable to connect. It seems very much like a ROS/networking issue (and I still think that is what it is), but I am at the point where I figure it wouldn't hurt to ask the following question: is there anything in the latest ModalAI SW that could be affecting this? If not, is there anything obvious I could be missing? Thanks in advance!