Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
-
I tried converting the color space as well, but it produces (essentially) the same error:
[INFO] [1718910969.334643]: receiving video frame
[ERROR] [1718910969.365757]: bad callback: <function callback at 0x7fc17b1cf1f0>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/prabhav/catkin_ws/src/image_subscriber/scripts/image_subscriber.py", line 31, in callback
    rgb_frame = cv2.cvtColor(current_frame, cv2.COLOR_YUV2RGB)
cv2.error: OpenCV(4.10.0) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0, 2, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 2
Apart from cv2.COLOR_YUV2RGB, I tried cv2.COLOR_YUV2RGB_NV12 as well, and that fails with the same error.
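For reference, the relevant part of the callback is essentially the following (simplified sketch: the cvtColor call is the one from line 31 of the traceback, while the cv_bridge usage and topic name are just illustrative of how I get the OpenCV image):

```python
# Simplified sketch of the subscriber callback; cv_bridge usage and the topic
# name are illustrative, the cvtColor call is the failing line from the traceback.
import rospy
import cv2
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

bridge = CvBridge()

def callback(msg):
    rospy.loginfo("receiving video frame")
    current_frame = bridge.imgmsg_to_cv2(msg)                    # comes back with 2 channels
    rgb_frame = cv2.cvtColor(current_frame, cv2.COLOR_YUV2RGB)   # raises the error above

rospy.init_node("image_subscriber")
rospy.Subscriber("/hires/image_raw", Image, callback)            # topic name illustrative
rospy.spin()
```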
Any ideas?
Thanks!
-
@Prabhav-Gupta, hmm, it is possible that the YUV image is not being sent out correctly from MPA to ROS. I will need to check with my colleagues.
BTW, are you able to view the YUV image using ROS tools (e.g. image_view) or something like that? Just display the image in a window on your remote machine.
Alex
-
Yes, I am able to display it on my remote machine using rosrun rqt_image_view rqt_image_view and then selecting the corresponding topic.
I checked OpenCV's documentation yesterday. As far as I can understand, it expects the image in a 2-channel format when it is in YUV and "should" be able to convert it to RGB. It seems like this is more of an issue on OpenCV's side. Any suggestions on what I could possibly do?
I referred to these links:
https://docs.opencv.org/4.x/de/d25/imgproc_color_conversions.html
https://docs.opencv.org/4.x/d8/d01/group__imgproc__color__conversions.html#gga4e0972be5de079fed4e3a10e24ef5ef0a86027a131a58ef888873fafd4eb603d0
Thanks!
-
Hi @Alex-Kushleyev,
Any ideas on what I can do to get the desired output? I did end up asking this question on Stack Overflow, as it seemed to be more of an OpenCV question than an issue with the RB5 drone. You may refer to it here.
Any help would be greatly appreciated.
Thanks!
-
@Prabhav-Gupta, yes, it seems like the OpenCV and ROS YUV_NV12 formats do not match up. I will take a look at it. It seems the ROS YUV is packed (interleaved), while the standard for storing YUV NV12 is two planes: plane 1: Y (size: width*height), plane 2: UV (size: width*height/2).
In the meantime, you can stream RTSP h264/h265 from VOXL (use decent quality so that the image looks good) and use OpenCV to receive the stream and get decompressed images: https://stackoverflow.com/questions/40875846/capturing-rtsp-camera-using-opencv-python
Would that work for you? (Unfortunately, with an RTSP stream you will not get the full image metadata, like exposure, gain, timestamp, etc.)
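Receiving the stream on your remote machine would look roughly like this (quick sketch; the URL is just a placeholder, use the IP / port / mount point that voxl-streamer reports when it starts):

```python
# Minimal sketch: receive the RTSP stream from VOXL and display decoded frames.
# The URL below is a placeholder -- substitute the address that voxl-streamer prints.
import cv2

cap = cv2.VideoCapture("rtsp://192.168.8.1:8900/live")

while True:
    ok, frame = cap.read()              # frame is already a decoded BGR image
    if not ok:
        break
    cv2.imshow("voxl stream", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```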
RTSP streaming can be done using voxl-streamer, which can accept either a YUV stream (and encode it) or an already encoded h264/5 stream from voxl-camera-server.
Alex
-
Here is a link that may provide a clue about the array dimensions for the YUV image:
YUV = Mat(H+H/2, W, CV_8UC1, pointerToData); // YUV Mat initialization
Please check the dimensions of the image data in the ROS callback.
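In Python, that check could be as simple as printing the message fields inside your callback (quick sketch):

```python
# Quick check inside the ROS callback: does the raw buffer match an NV12
# layout, i.e. width * height * 3/2 bytes (an (H + H/2) x W single-channel array)?
def callback(msg):
    print("encoding:        ", msg.encoding)
    print("width x height:  ", msg.width, "x", msg.height)
    print("step (bytes/row):", msg.step)
    print("data length:     ", len(msg.data))
    print("NV12 would be:   ", msg.width * msg.height * 3 // 2)
```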
Alex
-
Here is another example of converting YUV_NV12 (published via MPA) into RGB: https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/src/inference_helper.cpp?ref_type=heads#L309
However, I should mention that when published via MPA, the YUV image has two planes (a Y plane, which has the dimensions of the whole image, and a UV plane, which has dimensions width x height/2). The Y and UV planes are not interleaved. Based on the following code, it seems ROS expects the YUV_NV12 to be interleaved Y and UV: https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros/-/blob/master/catkin_ws/src/src/interfaces/camera_interface.cpp?ref_type=heads#L132
So if you are using ROS transport, it seems you may need to separate the Y and UV planes before converting to RGB/BGR using OpenCV.
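As a rough (untested) sketch, if the raw buffer in your callback really does contain the NV12 data (full Y plane followed by the UV plane), the conversion in Python would be something like:

```python
# Sketch of NV12 -> RGB on the ROS side, assuming the raw message buffer contains
# the two NV12 planes back to back (Y plane, then the interleaved UV half-plane),
# and that msg.width / msg.height are the true image dimensions.
import numpy as np
import cv2

def nv12_msg_to_rgb(msg):
    h, w = msg.height, msg.width
    buf = np.frombuffer(msg.data, dtype=np.uint8)
    if buf.size != w * h * 3 // 2:
        raise ValueError("buffer is not NV12-sized, got %d bytes" % buf.size)
    # Stack the planes into the (H + H/2) x W single-channel layout OpenCV expects
    yuv = buf.reshape((h * 3 // 2, w))
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB_NV12)
```

If the size check fails, that would suggest the data is being re-packed somewhere on the ROS side before it reaches your callback.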
Alex
-
I tried using voxl-streamer and it works with OpenCV imshow at least. I am able to view the camera feed on my remote machine. Thanks a lot for this suggestion. I believe I can work around this.
Thanks!
-
Thanks a lot for these resources. I will look into the links mentioned and get back.
Thanks again!
-
@Prabhav-Gupta , you are welcome!