Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
-
@Prabhav-Gupta, there is a custom voxl-opencv package that is probably installed and may conflict with your opencv for ROS installation: https://gitlab.com/voxl-public/voxl-sdk/third-party/voxl-opencv

In order to avoid conflicts like that, you may want to consider running your ROS stack inside a standard Ubuntu docker image, so that you have full control of the packages that you use. Uninstalling voxl-opencv may break dependencies for other voxl- packages.

Alex
-
Hello @Alex-Kushleyev, unfortunately, I did end up uninstalling voxl-opencv and some other voxl- packages in an attempt at installing OpenCV on the native OS. I haven't faced any issues up until now. Will ask them here in case I need some help.
(Just to be clear, ROS+OpenCV still isn't working as expected because of differing Python versions. As it turns out, Melodic uses Python 2, while the cv_bridge I am using in ROS needs Python 3.)
Thanks a lot!
-
@Prabhav-Gupta , if you are still having issues with ROS+OpenCV, then as I previously suggested, it may be better to go the Docker route and start fresh. You can even grab the official ROS docker images which already have everything installed, including opencv.
Alex
-
Hello @Alex-Kushleyev, I have an update on this (and of course an issue as well).
Current setup:
- RB5 Drone running ROS Melodic, connected to a wifi network. I am running roslaunch voxl_mpa_to_ros voxl_mpa_to_ros.launch to make the topics visible on the network.
- My laptop (ROS Noetic) connected to the same network with ROS networking set up properly. I am able to list the topics on the laptop with rostopic list.
- I wrote a simple script that subscribes to the camera topics and simply displays them on my laptop using OpenCV.
#!/usr/bin/env python3

# Import the necessary libraries
import rospy                        # Python library for ROS
from sensor_msgs.msg import Image   # Image is the message type
from cv_bridge import CvBridge      # Package to convert between ROS and OpenCV Images
import cv2                          # OpenCV library
from ultralytics import YOLO
import torch

print("Imports done, loading model...")
model = YOLO('/home/prabhav/Desktop/IISc/yolov8m.pt')
print("Model loaded!")

def callback(data):
    # Used to convert between ROS and OpenCV images
    #print("Bridge Created")
    br = CvBridge()

    # Output debugging information to the terminal
    rospy.loginfo("receiving video frame")

    # Convert ROS Image message to OpenCV image
    #print("Converting imgmsg to cv2")
    current_frame = br.imgmsg_to_cv2(data)

    #print("Running model on the image")
    #results = model.predict(current_frame, conf=0.5)
    #print(results)

    # Display image
    cv2.imshow("camera", current_frame)
    cv2.waitKey(1)

def receive_message():
    # Tells rospy the name of the node.
    # Anonymous = True makes sure the node has a unique name. Random
    # numbers are added to the end of the name.
    print("Initialization done")
    rospy.init_node('video_sub_py', anonymous=True)

    # Node is subscribing to the video_frames topic
    print("Subscriber Created")
    rospy.Subscriber('/hires_small_color', Image, callback)

    # spin() simply keeps python from exiting until this node is stopped
    print("Spinning")
    rospy.spin()

    # Close down the video stream when done
    cv2.destroyAllWindows()

if __name__ == '__main__':
    receive_message()
Here is the issue I am facing: when I subscribe to a color image, say /hires_small_color, and try to display it using the above code, I get the following error:

[INFO] [1718900460.696906]: receiving video frame
[ERROR] [1718900460.710019]: bad callback: <function callback at 0x7f3c4e6a31f0>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/prabhav/catkin_ws/src/image_subscriber/scripts/image_subscriber.py", line 37, in callback
    cv2.imshow("camera", current_frame)
cv2.error: OpenCV(4.10.0) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0, 2, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 2
On going through the error, I understand that for some reason, only two color channels (out of the three required) are present. What I don't get is why this may be happening.
I am able to view the video feed from that camera normally on the web portal as well (http://<ip>).
At the same time, subscribing to a monochrome image simply to display it does not come up with any errors. The video feed also runs properly with opencv.
Any help would be greatly appreciated.
Thanks!
-
When you run voxl-inspect-cam -a, what format is your hires_small_color? It is probably YUV (NV12), which has two planes (a Y plane and a UV plane), not 3 interleaved channels like RGB. If you look at the source code of voxl-mpa-to-ros (https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros/-/blob/master/catkin_ws/src/src/interfaces/camera_interface.cpp?ref_type=heads#L100), the YUV NV12 format is published as is, not converted to RGB or BGR. You should look into whether it is possible to convert from NV12 to RGB either in ROS or OpenCV.
Alternatively, you could use the compressed stream (h264/h265), which voxl-mpa-to-ros also publishes; it will save you a lot of network bandwidth. You could adjust the compression quality to get the bit rate you need.
Alex
-
Yes, it is in fact NV12.
I will look into conversion from NV12 to RGB/BGR.
When I try to subscribe to the compressed stream, the subscriber (on the laptop) hangs up and the publisher (drone) gives me this error message:
Interface hires_small_color now publishing
Interface hires_small_color ceasing to publish
Interface hires_small_encoded now publishing
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
[voxl_mpa_to_ros_node-2] process has died [pid 2872, exit code -6, cmd /opt/ros/melodic/lib/voxl_mpa_to_ros/voxl_mpa_to_ros_node __name:=voxl_mpa_to_ros_node __log:=/home/root/.ros/log/391f97e8-2f20-11ef-bc4c-00037f125fa6/voxl_mpa_to_ros_node-2.log].
log file: /home/root/.ros/log/391f97e8-2f20-11ef-bc4c-00037f125fa6/voxl_mpa_to_ros_node-2*.log
after which I have to restart the publisher on the drone. In the meantime, I will look into the NV12 to RGB conversion.
Thanks a lot for the quick response!
-
@Prabhav-Gupta Hmm, interesting regarding the compressed publishing issue. I have not tried it myself. Perhaps there is a bug there..
Yes, please check if you can do the conversion, but if you are using high resolution images, just keep in mind that streaming them across the network is very slow. You can do some sanity checking to calculate the bandwidth requirements:
- let's assume 1920x1080 YUV (NV12), 30 fps
- 1920 * 1080 * 1.5 = 3110400 bytes (3.1 MB per frame); the factor of 1.5 is because the Y plane is full size and the UV plane is half size
- at 30 fps this is about 93 MB/s, which is almost the maximum of a 1 Gbit network link
- if you try to stream in RGB format, the bandwidth requirement will be double that (3 bytes per pixel instead of 1.5)
If you still want to use the uncompressed format, you may want to throttle the publishing on the VOXL side (in the voxl-mpa-to-ros tools).
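One quick way to experiment without modifying voxl-mpa-to-ros could be ROS's topic_tools throttle, run on the VOXL so that the full-rate frames never cross the network (the topic name and rate below are just placeholders):

rosrun topic_tools throttle messages /hires_small_color 5.0 /hires_small_color_throttled

Your subscriber on the laptop would then listen to the throttled topic instead.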
-
Thanks a lot! I will look into this.
-
I tried converting the color space as well, but it turns up (essentially) the same error:
[INFO] [1718910969.334643]: receiving video frame
[ERROR] [1718910969.365757]: bad callback: <function callback at 0x7fc17b1cf1f0>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/prabhav/catkin_ws/src/image_subscriber/scripts/image_subscriber.py", line 31, in callback
    rgb_frame = cv2.cvtColor(current_frame, cv2.COLOR_YUV2RGB)
cv2.error: OpenCV(4.10.0) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0, 2, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 2
Apart from cv2.COLOR_YUV2RGB, I tried YUV2RGB_NV12 as well, and that too comes up with the same error.
Any ideas?
Thanks!
-
@Prabhav-Gupta , Hmm, it is possible that the yuv image is not sent out correctly from mpa to ros. I will need to check with my colleagues.
BTW, are you able to view the YUV image using ROS tools (using image_view or something like that)? Just display the image in a window on your remote machine.
Alex
-
Yes, I am able to display it on my remote machine using rosrun rqt_image_view rqt_image_view and then selecting the corresponding topic.
I checked OpenCV's documentation yesterday. As far as I can understand, it expects the image in a 2-channel format itself when it is in YUV and "should" be able to convert it to RGB. Seems like this is more of an issue from OpenCV's side. Any suggestions on what I could possibly do?
I referred to these links:
https://docs.opencv.org/4.x/de/d25/imgproc_color_conversions.html
https://docs.opencv.org/4.x/d8/d01/group__imgproc__color__conversions.html#gga4e0972be5de079fed4e3a10e24ef5ef0a86027a131a58ef888873fafd4eb603d0
Thanks!
-
Hi @Alex-Kushleyev,
Any ideas on what I can do to get the desired output? I did end up asking this question on Stack Overflow, as this seemed to be more of an OpenCV-related question than an issue with the RB5 drone. You may refer to it here.
Any help would be greatly appreciated.
Thanks!
-
@Prabhav-Gupta, yes, it seems like the OpenCV and ROS YUV_NV12 formats do not match up. I will take a look at it.. it seems the ROS YUV is packed (interleaved), while the standard for storing YUV NV12 is having two planes: plane 1: Y (size: width * height), plane 2: UV (size: width * height / 2).
In the meantime.. you can stream an RTSP h264/h265 feed from VOXL (use decent quality so that the image looks good) and use OpenCV to receive the stream and get decompressed images: https://stackoverflow.com/questions/40875846/capturing-rtsp-camera-using-opencv-python
Would that work for you? (Unfortunately, with an RTSP stream you will not get the full image metadata, like exposure, gain, timestamp, etc.)
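As a rough sketch of the receiving side (the URL, IP and port are assumptions; use whatever address voxl-streamer actually reports for its RTSP mount point):

#!/usr/bin/env python3
# Sketch: receive the drone's RTSP stream with OpenCV on the laptop.
# The URL is a placeholder -- substitute the drone's IP and the port/path
# that voxl-streamer reports when it starts.
import cv2

cap = cv2.VideoCapture("rtsp://<drone-ip>:8900/live")

while True:
    ok, frame = cap.read()   # frame arrives already decoded as BGR
    if not ok:
        print("No frame received, stopping")
        break
    cv2.imshow("rtsp", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()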
RTSP streaming can be done using voxl-streamer, which can accept either a YUV (and encode it) or an already encoded h264/5 stream from voxl-camera-server.
Alex
-
Here is a link that may provide a clue about the array dimensions for the YUV image :
YUV = Mat(H+H/2, W, CV_8UC1, pointerToData); // YUV Mat initialization
Please check the dimensions of the image data in the ROS callback.
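For example, a quick check dropped into the callback of the script above (a sketch; the fields are the standard sensor_msgs/Image ones) would show how the YUV data is actually laid out:

def callback(data):
    # (uses the same imports as the script above: rospy, cv2, CvBridge)
    # Print the raw ROS Image fields to see how the YUV buffer is laid out.
    rospy.loginfo("encoding=%s width=%d height=%d step=%d data_len=%d",
                  data.encoding, data.width, data.height, data.step, len(data.data))
    frame = CvBridge().imgmsg_to_cv2(data)
    rospy.loginfo("cv2 array shape: %s, dtype: %s", frame.shape, frame.dtype)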
Alex
-
Here is another example of converting YUV_NV12 (published via mpa) into RGB. https://gitlab.com/voxl-public/voxl-sdk/services/voxl-tflite-server/-/blob/master/src/inference_helper.cpp?ref_type=heads#L309
However, I should mention that when published via MPA, the YUV image has two planes (a Y plane, which has the dimensions of the whole image, and a UV plane, which has dimensions of width x height/2). The Y and UV planes are not interleaved. Based on the following code, it seems ROS expects the YUV_NV12 to be interleaved Y and UV: https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros/-/blob/master/catkin_ws/src/src/interfaces/camera_interface.cpp?ref_type=heads#L132
So if you are using ROS transport, it seems you may need to separate the Y and UV planes before converting to RGB/BGR using OpenCV.
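A rough sketch of that idea (untested; it assumes the message really carries width * height * 1.5 bytes of NV12 data, with the Y plane followed by the UV plane, which is worth confirming with the dimension check above):

import numpy as np
import cv2

def nv12_msg_to_bgr(msg):
    # Assumption: msg.data holds width*height*1.5 bytes of NV12
    # (full-size Y plane followed by the half-size interleaved UV plane).
    h, w = msg.height, msg.width
    yuv = np.frombuffer(msg.data, dtype=np.uint8)[: w * h * 3 // 2]
    # OpenCV wants NV12 as a single-channel array of shape (h * 3/2, w).
    yuv = yuv.reshape(h * 3 // 2, w)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_NV12)

The callback could then call such a helper instead of imgmsg_to_cv2 and pass the result to imshow or the YOLO model.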
Alex
-
I tried using voxl-streamer and it works with OpenCV imshow at least. I am able to view the camera feed on my remote machine. Thanks a lot for this suggestion. I believe I can work around this.
Thanks!
-
Thanks a lot for these resources. I will look into the links mentioned and get back.
Thanks again!
-
@Prabhav-Gupta , you are welcome!