Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
-
When you run voxl-inspect-cam -a, what format is your hires_small_color? It is probably YUV (NV12), which has two planes (a Y plane and a UV plane), not 3 interleaved channels like RGB. If you look at the source code of voxl-mpa-to-ros (https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros/-/blob/master/catkin_ws/src/src/interfaces/camera_interface.cpp?ref_type=heads#L100), the YUV NV12 format is published as-is, not converted to RGB or BGR. You should look into whether it is possible to convert from NV12 to RGB either in ROS or OpenCV.
Alternatively you could use the compressed stream (h264/h265) which voxl-mpa-to-ros also publishes and it will save you a lot of network bandwidth. You could adjust the compression quality to get the bit rate you need.
Alex
-
Yes, it is in fact NV12.
I will look into conversion from NV12 to RGB/BGR.
When I try to subscribe to the compressed stream, the subscriber (on the laptop) hangs up and the publisher (drone) gives me this error message:
Interface hires_small_color now publishing
Interface hires_small_color ceasing to publish
Interface hires_small_encoded now publishing
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
[voxl_mpa_to_ros_node-2] process has died [pid 2872, exit code -6, cmd /opt/ros/melodic/lib/voxl_mpa_to_ros/voxl_mpa_to_ros_node __name:=voxl_mpa_to_ros_node __log:=/home/root/.ros/log/391f97e8-2f20-11ef-bc4c-00037f125fa6/voxl_mpa_to_ros_node-2.log].
log file: /home/root/.ros/log/391f97e8-2f20-11ef-bc4c-00037f125fa6/voxl_mpa_to_ros_node-2*.log
after which I have to restart the publisher on the drone. In the meantime, I will look into the NV12 to RGB conversion.
Thanks a lot for the quick response!
-
@Prabhav-Gupta Hmm, interesting regarding the compressed publishing issue. I have not tried it myself; perhaps there is a bug there.
Yes, please check if you can do the conversion, but if you are using high resolution images, just keep in mind that streaming them across the network is very slow. You can do some sanity checking to calculate the bandwidth requirements:
- let's assume 1920x1080 YUV (NV12) at 30 fps
- 1920 * 1080 * 1.5 = 3110400 bytes (~3.1 MB per frame); the factor of 1.5 is because the Y plane is full size and the UV plane is half size
- at 30 fps that is ~93 MB/s, which is almost the maximum of a 1 Gbit network link
- if you try to stream in RGB format, the bandwidth requirement will be double that (3 bytes per pixel instead of 1.5)
If you still want to use an uncompressed format, you may want to throttle the publishing on the VOXL side (in the voxl-mpa-to-ros tools).
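The arithmetic above can be sanity-checked in a couple of lines (assuming the 1080p @ 30 fps numbers used in the list):

```python
# Bandwidth estimate for 1920x1080 NV12 vs RGB at 30 fps.
width, height, fps = 1920, 1080, 30

nv12_frame = width * height * 3 // 2  # 1.5 bytes/pixel (full Y + half-size UV)
rgb_frame = width * height * 3        # 3 bytes/pixel

print(nv12_frame)                     # 3110400 bytes, ~3.1 MB per frame
print(nv12_frame * fps / 1e6)         # ~93.3 MB/s of uncompressed NV12
print(rgb_frame * fps / 1e6)          # ~186.6 MB/s for RGB, double NV12
```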
-
Thanks a lot! I will look into this.
-
I tried converting the color space as well, but it turns up (essentially) the same error:
[INFO] [1718910969.334643]: receiving video frame
[ERROR] [1718910969.365757]: bad callback: <function callback at 0x7fc17b1cf1f0>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/prabhav/catkin_ws/src/image_subscriber/scripts/image_subscriber.py", line 31, in callback
    rgb_frame = cv2.cvtColor(current_frame, cv2.COLOR_YUV2RGB)
cv2.error: OpenCV(4.10.0) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0, 2, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 2
Apart from cv2.COLOR_YUV2RGB, I tried cv2.COLOR_YUV2RGB_NV12 as well, and that too comes up with the same error.
Any ideas?
Thanks!
-
@Prabhav-Gupta, hmm, it is possible that the YUV image is not sent out correctly from MPA to ROS. I will need to check with my colleagues.
BTW, are you able to view the YUV image using ROS tools (image_view or something like that)? Just display the image in a window on your remote machine.
Alex
-
Yes, I am able to display it on my remote machine using rosrun rqt_image_view rqt_image_view and then selecting the corresponding topic.
I checked OpenCV's documentation yesterday. As far as I can understand, it expects the image in a 2-channel format itself when it is in YUV and "should" be able to convert it to RGB. It seems like this is more of an issue on OpenCV's side. Any suggestions on what I could possibly do?
I referred to these links:
https://docs.opencv.org/4.x/de/d25/imgproc_color_conversions.html
https://docs.opencv.org/4.x/d8/d01/group__imgproc__color__conversions.html#gga4e0972be5de079fed4e3a10e24ef5ef0a86027a131a58ef888873fafd4eb603d0
Thanks!
-
Hi @Alex-Kushleyev,
Any ideas on what I can do to get the desired output? I did end up asking this question on Stack Overflow, as it seemed to be more of an OpenCV question than an issue with the RB5 drone. You may refer to it here.
Any help would be greatly appreciated.
Thanks!
-
@Prabhav-Gupta, yes, it seems like the OpenCV and ROS YUV_NV12 formats do not match up. I will take a look at it. It seems the ROS YUV is packed (interleaved), while the standard for storing YUV NV12 is having two planes: plane 1: Y (size: width * height), plane 2: UV (size: width * height / 2).
In the meantime.. you can stream a rtsp h264/h265 from VOXL (use decent quality so that image looks good) and use opencv to receive the stream and get decompressed images: https://stackoverflow.com/questions/40875846/capturing-rtsp-camera-using-opencv-python
Would that work for you? (Unfortunately, with an RTSP stream you will not get the full image metadata, like exposure, gain, timestamp, etc.)
RTSP streaming can be done using voxl-streamer, which can accept either a YUV stream (and encode it) or an already encoded h264/h265 stream from voxl-camera-server.
Alex
-
Here is a link that may provide a clue about the array dimensions for the YUV image:
YUV = Mat(H+H/2, W, CV_8UC1, pointerToData); // YUV Mat initialization
Please check the dimensions of the image data in the ROS callback.
Alex
-
Here is another example of converting YUV_NV12 (published via MPA) into RGB: https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/src/inference_helper.cpp?ref_type=heads#L309
However, I should mention that when published via MPA, the YUV image has two planes (a Y plane with the dimensions of the whole image, and a UV plane with dimensions width x height/2). The Y and UV planes are not interleaved. Based on the following code, it seems ROS expects the YUV_NV12 to have Y and UV interleaved: https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros/-/blob/master/catkin_ws/src/src/interfaces/camera_interface.cpp?ref_type=heads#L132
So if you are using ROS transport, it seems you may need to separate the Y and UV planes before converting to RGB/BGR using OpenCV.
Alex
-
I tried using voxl-streamer and it works, with OpenCV imshow at least. I am able to view the camera feed on my remote machine. Thanks a lot for this suggestion; I believe I can work around this.
Thanks!
-
Thanks a lot for these resources. I will look into the links mentioned and get back.
Thanks again!
-
@Prabhav-Gupta , you are welcome!