Hi,
You won't be able to scp over the USB cable. If you're plugged in with a cable, you can adb pull
the files. If your drone is connected to the same network as your computer (either via an ethernet cable or wifi), you'll be able to scp them.
This issue pops up from time to time and it is, without fail, a hardware issue. Can you check not just the connectors for your stereo cameras but also the cables themselves? They can develop kinks or get scratched and expose the traces if they pass through any number of unfriendly environments, which regularly leads to hal3 being unable to see the device.
We're still in the somewhat early stages of development, so this may be subject to change, but the I/O format will be through our Modal Pipe Architecture, publishing POSIX pipes to /run/mpa.
//server pipes
#define ALIGNED_PTCLOUD_NAME "voxl_mapper_aligned_ptcloud"
#define ALIGNED_PTCLOUD_LOCATION (MODAL_PIPE_DEFAULT_BASE_DIR ALIGNED_PTCLOUD_NAME "/")
#define ALIGNED_PTCLOUD_CH 0
#define TSDF_VOXELS_NAME "voxl_mapper_tsdf_voxels"
#define TSDF_VOXELS_LOCATION (MODAL_PIPE_DEFAULT_BASE_DIR TSDF_VOXELS_NAME "/")
#define TSDF_VOXELS_CH 1
#define TSDF_SURFACE_PTS_NAME "voxl_mapper_tsdf_surface_pts"
#define TSDF_SURFACE_PTS_LOCATION (MODAL_PIPE_DEFAULT_BASE_DIR TSDF_SURFACE_PTS_NAME "/")
#define TSDF_SURFACE_PTS_CH 2
#define PLAN_NAME "voxl_planner"
#define PLAN_LOCATION (MODAL_PIPE_DEFAULT_BASE_DIR PLAN_NAME "/")
#define PLAN_CH 3
#define MESH_NAME "voxl_mapper_mesh"
#define MESH_LOCATION (MODAL_PIPE_DEFAULT_BASE_DIR MESH_NAME "/")
#define MESH_CH 4
This is the current list of outputs that will be published and available. We're still working out the format of the TSDF outputs, but I can tell you more about the others.
The aligned pointcloud is mostly a visualization tool: it transforms the most recent pointcloud that was integrated to match the drone's orientation. It uses the standard libmodal_pipe pointcloud structure.
The planner outputs a polynomial path (the format of which can be found here). This header is currently pasted into the vvpx4 and mapper projects, but will most likely be merged into the interfaces file in libmodal_pipe when we release this.
The mesh output uses a binary format of our own creation, somewhat similar to a glTF but not human-readable and thus much smaller. It will be published over the web through voxl-portal by default. The mesh is designed so that updates can be published rather than the entire mesh each time (though there is a command that can be given to the server to republish the whole mesh). The mesh is divided into "blocks"; the mapper maintains a list of which ones have changed since the last publish and republishes only those blocks. The output follows this format:
{
    mesh_metadata_t             // mesh metadata (timestamp, how many blocks, size_bytes, etc.)
    {                           // array of metadata.num_blocks of the following:
        mesh_block_metadata_t   // block metadata (position of block, number of vertices in block)
        {                       // array of block_metadata.num_vertices of the following:
            mesh_vertex_t       // vertex data (xyz, rgb)
        }
    }
}
The structs have definitions that are subject to change but are currently:
#define MESH_MAGIC_NUMBER (0x4d455348) // magic number, spells "MESH" in ascii
// TODO: put these in MPA somewhere
typedef struct mesh_metadata_t {
    uint32_t magic_number;  ///< Unique 32-bit number used to signal the beginning of a mesh packet while parsing a data stream
    uint32_t num_blocks;    ///< Number of blocks being sent
    int64_t  timestamp_ns;  ///< Timestamp of the update in monotonic time
    uint64_t size_bytes;    ///< Total size of following data (num_blocks blocks, each containing metadata and an array of vertices)
    double   block_size;    ///< Length of the edges of a block in meters
    uint64_t reserved[12];  ///< Reserved fields for later use
} mesh_metadata_t;
typedef struct mesh_block_metadata_t {
    uint16_t index[3];      ///< Position of the block in the grid
    uint16_t num_vertices;  ///< Number of vertices in the block
} mesh_block_metadata_t;
typedef struct mesh_vertex_t {
    uint16_t x;
    uint16_t y;
    uint16_t z;
    uint8_t  r;
    uint8_t  g;
    uint8_t  b;
} mesh_vertex_t;
As for the limitations you were asking about: in the ROS demo that we ran, we used 20cm voxels with a ray length of 5 meters (the reliable range of the PMD TOF sensor that we use). We're hoping to either get the voxel size down a bit or (scary) modify the core algorithm of voxblox to allow a surface between two unoccupied voxels (i.e. a wall can fall between 2 voxels, and currently voxblox will backfill the voxel behind it since it requires an occupied voxel), to allow for slightly higher-definition walls.
UAV-wise, we have done this once on an M500 drone, but we've been building this with indoor mapping as the primary use case, so most of what we do with this is on a Seeker. The Seeker is better equipped for this: it's small enough to fly through doors and comes equipped with both the TOF sensor and the stereo pair (though we have so far primarily used the TOF for mapping, the final mapping product will allow either or both to be used).
Let me know if you have any other questions!
Hi Lynn,
That should be built into system image 3.3 and shouldn't show up in opkg. Can you paste the output of voxl-version
to make sure everything is set up properly? Additionally, you can check whether the libraries exist manually in /usr/lib or /usr/lib64 by running:
yocto:/usr$ find | grep -i opencl
./include/CL/opencl.h
./include/adreno/CL/opencl.h
./include/opencv4/opencv2/core/opencl
./include/opencv4/opencv2/core/opencl/opencl_info.hpp
./include/opencv4/opencv2/core/opencl/ocl_defs.hpp
./include/opencv4/opencv2/core/opencl/opencl_svm.hpp
./include/opencv4/opencv2/core/opencl/runtime
./include/opencv4/opencv2/core/opencl/runtime/opencl_clamdblas.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_svm_definitions.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_svm_hsa_extension.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_svm_20.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_clamdfft.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_gl.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_core.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_core_wrappers.hpp
./include/opencv4/opencv2/core/opencl/runtime/opencl_gl_wrappers.hpp
./include/opencv4/opencv2/core/opencl/runtime/autogenerated
./include/opencv4/opencv2/core/opencl/runtime/autogenerated/opencl_clamdblas.hpp
./include/opencv4/opencv2/core/opencl/runtime/autogenerated/opencl_clamdfft.hpp
./include/opencv4/opencv2/core/opencl/runtime/autogenerated/opencl_gl.hpp
./include/opencv4/opencv2/core/opencl/runtime/autogenerated/opencl_core.hpp
./include/opencv4/opencv2/core/opencl/runtime/autogenerated/opencl_core_wrappers.hpp
./include/opencv4/opencv2/core/opencl/runtime/autogenerated/opencl_gl_wrappers.hpp
./include/tensorflow-2-2-3/tensorflow/lite/delegates/gpu/cl/opencl_wrapper.h
./lib/libOpenCL.so
./lib64/libOpenCL.so
./share/cmake-3.3/Help/module/FindOpenCL.rst
./share/cmake-3.3/Modules/FindOpenCL.cmake
./share/licenses/opencv4/opencl-headers-LICENSE.txt
yocto:/usr$
This is a known issue with our GStreamer interface running at low framerates. You can install voxl-portal, which is a little web server that provides access to any of the cameras running on voxl in a web browser. If you're on our latest system image (3.3.0) and voxl-suite (0.4.6), you can run opkg update
then opkg install voxl-portal
and then pull up the drone's IP in a web browser to view image streams.
Hi Pawel,
There is a microSD slot on the voxl board; you should be able to just pop a card in there and get more space.
Hi,
This is a bug with the recently pushed 0.2.5 voxl-streamer: the config file is missing a comma after the conf-version line. A patch will be up shortly, but you can add it yourself to the file in /etc/modalai/voxl-streamer.conf to get it working in the meantime.
Hi!
The hardware on voxl only supports a stereo pair out of the J3 connector. However, you could absolutely plug in fisheye cameras instead of the pinhole ones if you're just trying to get a better field of view. Additionally, the system is set up to support that same ov7251 camera in both the J2 and J4 ports, so you could in theory have 4 fisheye cameras concurrently.
NOTE: voxl-camera-server (open source) is not set up to handle multiple cameras of the same type, since we don't actively support any camera configurations like that. It's very doable though: you'd have to check the camera IDs for the configuration that you create and modify camera server if you're trying to use our standard software stack (so out of the box you can support 3 fisheye cameras, and it would only take some userspace coding to support 4).
That is interesting; we've not seen anything like this since we settled on our current AE settings, though as you say these may just be some strange lighting conditions. You can tweak the exposure values in /etc/modalai/voxl-camera-server.conf. The changeable parameters are all of the ones with modal_ae_...
If you're interested in how it works under the hood and what changes would do, you can find the exposure source here.
Hi,
It looks like the data on that documentation page was erroneously copied over from a different platform. I'll update it shortly, but in the future, for QVIO needs, this docs page is more actively maintained by the software team.
For reference, the commands in the voxl SDK for the things you asked about are voxl-inspect-qvio
and voxl-configure-qvio
. These follow the standard naming format for commands in our SDK, so you can tab-complete voxl-inspect-
or voxl-configure-
to see all of the other things you can inspect or configure.
Hi,
The configure cameras script only affects the MIPI cameras plugged into J6, J7, and J8 on the board. For USB cameras you should look at voxl-uvc-server and voxl-configure-uvc.
As for portal, it's fully dynamic based on which cameras are being published over our mpa services, so if you configure UVC server correctly it'll appear there alongside any other cameras that camera server is publishing (or any overlays coming from VIO/tag detector/etc).
voxl-portal is its own standalone service doing HTTP streaming with MJPEG (not RTSP); it has nothing to do with voxl-streamer, which does our RTSP streaming if you're viewing in VLC/QGC/etc.
Sounds like an error occurred; what does journalctl -u voxl-camera-server
say after running the commands?
The frames that go through logger/streamer are the preview stream. To take a snapshot, run voxl-send-command hires snapshot <file path>
and it will save a full-resolution JPEG snapshot to the filesystem. If you omit the file path, it will save to /data/snapshots/
With regards to camera server, the "types" that it takes define behavior more than the actual reading of drivers. The underlying camera layer deals with querying the sensor using the drivers that you've posted, and the type in the camera server config file defines a few behaviors like resolution, image format, allowable streams, etc., some of which are customizable in the config file.
The imx214
type in there is named as such because initially that was the only IMX sensor we used. Its behavior is applicable to the 377 and 412 as well (at least as far as we've tested) and would likely work on some of the others that you've mentioned.
For streaming over RTSP, please see our docs page on streaming over rtsp
The script voxl-configure-vision-px4
will recreate the config file as it was by default.
How are you timing the result? It can be an expensive operation to move image memory from the CPU to the GPU and back, so it's not uncommon for the entire process to take longer on the GPU if you're copying large quantities of data back and forth, even if the main processing you're doing is faster.
Hi,
Can you check in voxl-portal what the pointcloud voxl_pc_out
looks like? This is the VOA pointcloud before the final bit of filtering that sends it to the flight controller.
Also check /etc/modalai/extrinsics.conf
to make sure that it matches the transforms on your drone.
Also, it looks like you've specified a 6x8 board to the calibrator, but your board looks like it has 6x9 interior corners (the calibrator counts interior corners, which is one less than the number of squares per side). This technically wouldn't matter too much since it'll find a valid 6x8 within that, but it'll run faster if it's not given two very similar solutions to choose between.
I'd recommend putting a strong white border around your checkerboard (at least half a square wide). If you look at ours, we've just taped over the outer half of the outside squares with white tape; a well-defined outer border also significantly helps the checkerboard detector.