Cool, Thanks a lot!
Posts made by Prabhav Gupta
-
RE: Install libopencv on Starling 2
I was trying to create this script -
@Alex-Kushleyev said in Install libopencv on Starling 2:
The easiest way to do this, perhaps, is to create a python script that reads the camera calibration json and publishes a latched message via rospy. Similarly it could be done in C++ with a little more effort.. I don't think this currently exists.
and I need some help.
I am working with the RB5 drone and found these files in the /data/modalai/ folder on the drone.
In opencv_stereo_front_extrinsics.yml:
rb5:/data/modalai$ cat opencv_stereo_front_extrinsics.yml
%YAML:1.0
---
R: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 9.9995382093423246e-01, -9.1903872613942166e-03,
       -2.8094093711296423e-03, 9.2509466314868449e-03,
       9.9970714036476482e-01, 2.2361875818584419e-02,
       2.6030723098620125e-03, -2.2386832864208624e-02,
       9.9974599460505953e-01 ]
T: !!opencv-matrix
   rows: 3
   cols: 1
   dt: d
   data: [ -7.9639069991921108e-02, -9.0666624431155548e-05,
       -1.0799460870486485e-03 ]
reprojection_error: 2.7322523335815047e-01
calibration_time: "2022-04-08 23:15:23"
and in opencv_stereo_front_intrinsics.yml:
rb5:/data/modalai$ cat opencv_stereo_front_intrinsics.yml
%YAML:1.0
---
M1: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 5.0159907997903338e+02, 0., 2.9377341710319376e+02, 0.,
       5.0083699409439379e+02, 2.3544444409742863e+02, 0., 0., 1. ]
D1: !!opencv-matrix
   rows: 5
   cols: 1
   dt: d
   data: [ -1.6571748643440132e-01, 6.3134583515870882e-02,
       2.4908601395800438e-03, 6.9258577723375913e-04, 0. ]
reprojection_error1: 1.8877343672847582e-01
M2: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 5.0289492433892644e+02, 0., 3.1156572782508289e+02, 0.,
       5.0234014337071841e+02, 2.4962793784523797e+02, 0., 0., 1. ]
D2: !!opencv-matrix
   rows: 5
   cols: 1
   dt: d
   data: [ -1.6640627389329365e-01, 6.4800083011513396e-02,
       1.1988146735987267e-04, -6.3680006718804514e-04, 0. ]
reprojection_error2: 1.8906708149286014e-01
width: 640
height: 480
distortion_model: plumb_bob
calibration_time: "2022-04-08 23:15:23"
To create the script that publishes CameraInfo, I need to figure out which values in these two files correspond to the fields of the CameraInfo message, as given here: https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CameraInfo.html
sensor_msgs/CameraInfo:
std_msgs/Header header
uint32 height
uint32 width
string distortion_model
float64[] D
float64[9] K
float64[9] R
float64[12] P
uint32 binning_x
uint32 binning_y
sensor_msgs/RegionOfInterest roi
Some of the mappings are clear, such as height, width, distortion_model, R, and D.
How do I map the K and P fields required by CameraInfo from the calibration files? And is there anything else I should look out for while doing this mapping?
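My current guess (please correct me if this is wrong) is that K corresponds to M1/M2 from the intrinsics file, and that P is K padded to 3x4 (with -fx * baseline in the last column of the first row for the right camera, taking the baseline from T in the extrinsics file). A rough sketch of the latched publisher I have in mind, assuming those mappings; the topic name and frame_id are just placeholders:

#!/usr/bin/env python3
# Rough sketch: build a CameraInfo message from the ModalAI OpenCV calibration
# files. The K/P mapping here is my assumption, not a confirmed answer.
import rospy
import cv2
from sensor_msgs.msg import CameraInfo

def load_camera_info(intrinsics_path, cam='left'):
    fs = cv2.FileStorage(intrinsics_path, cv2.FILE_STORAGE_READ)
    M = fs.getNode('M1' if cam == 'left' else 'M2').mat()   # 3x3 camera matrix
    D = fs.getNode('D1' if cam == 'left' else 'D2').mat()   # plumb_bob coefficients
    info = CameraInfo()
    info.width = int(fs.getNode('width').real())
    info.height = int(fs.getNode('height').real())
    info.distortion_model = 'plumb_bob'
    info.D = D.flatten().tolist()
    info.K = M.flatten().tolist()
    info.R = [1., 0., 0., 0., 1., 0., 0., 0., 1.]            # identity for an unrectified camera (assumption)
    # P = [K | t]; t is zero for the left camera, and would be -fx*baseline for
    # the right camera, with the baseline taken from T in the extrinsics file.
    info.P = [M[0, 0], 0., M[0, 2], 0.,
              0., M[1, 1], M[1, 2], 0.,
              0., 0., 1., 0.]
    fs.release()
    return info

if __name__ == '__main__':
    rospy.init_node('camera_info_pub')
    info = load_camera_info('/data/modalai/opencv_stereo_front_intrinsics.yml', 'left')
    info.header.frame_id = 'stereo_front_left'   # placeholder frame name
    # latched so late subscribers (e.g. RTAB-MAP) still receive the message
    pub = rospy.Publisher('/stereo_front/left/camera_info', CameraInfo, queue_size=1, latch=True)
    pub.publish(info)
    rospy.spin()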
Thanks!
-
RE: Unable to build ROS python workspace on the RB5 drone
I had backed up my ROS workspace using adb pull before flashing the drone and used adb push to restore the workspace on the drone. This unfortunately changed the file permissions for the folders within the workspace.
I was able to fix this by deleting (using rm -rf) all folders within my ROS workspace except src (which contains the packages I was working on) and building the packages again.
I was then faced with the other CMake error visible above -
@Prabhav-Gupta said in Unable to build ROS python workspace on the RB5 drone:
CMake Error at /home/root/python_ws/src/vision_opencv/cv_bridge/CMakeLists.txt:16 (find_package):
Could not find a package configuration file provided by "OpenCV" (requested version 3) with any of the following names:
OpenCVConfig.cmake
opencv-config.cmake
I fixed this by building OpenCV from source on the drone (following this - https://docs.opencv.org/4.x/d7/d9f/tutorial_linux_install.html).
After installing OpenCV, catkin build did not return any errors at the time. I do need to check whether my ROS nodes are working as expected; will update here once I do.
Thank you for your time!
-
Unable to build ROS python workspace on the RB5 drone
Hi,
I am working on an RB5 drone. I had set up a Python ROS workspace (ROS Melodic) and it was working fine up until a few days ago.
I had to upgrade the SDK on the drone to the latest available version. Since then, I have been facing issues with building the workspace.
This is the error I am facing while running catkin build:
rb5:~/python_ws$ catkin build obj_det
/bin/sh: /home/root/python_ws/install/env.sh: Permission denied
WARNING: Sourced environment from `/home/root/python_ws/install/env.sh` has no environment variables. Something is wrong.
----------------------------------------------------------------------------------------------------------
Profile:                     default
Extending:             [env] /home/root/catkin_ws/devel:/opt/ros/melodic
Workspace:                   /home/root/python_ws
----------------------------------------------------------------------------------------------------------
Build Space:        [exists] /home/root/python_ws/build
Devel Space:        [exists] /home/root/python_ws/devel
Install Space:      [exists] /home/root/python_ws/install
Log Space:          [exists] /home/root/python_ws/logs
Source Space:       [exists] /home/root/python_ws/src
DESTDIR:            [unused] None
----------------------------------------------------------------------------------------------------------
Devel Space Layout:          linked
Install Space Layout:        merged
----------------------------------------------------------------------------------------------------------
Additional CMake Args:       -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.6m.so
Additional Make Args:        None
Additional catkin Make Args: None
Internal Make Job Server:    True
Cache Job Environments:      False
----------------------------------------------------------------------------------------------------------
Whitelisted Packages:        None
Blacklisted Packages:        None
----------------------------------------------------------------------------------------------------------
Workspace configuration appears valid.
----------------------------------------------------------------------------------------------------------
[build] Found '5' packages in 0.0 seconds.
Starting  >>> cv_bridge
/bin/sh: /home/root/python_ws/install/env.sh: Permission denied
WARNING: Sourced environment from `/home/root/python_ws/install/env.sh` has no environment variables. Something is wrong.
_________________________________________________________________________________________________________
Errors     << cv_bridge:check /home/root/python_ws/logs/cv_bridge/build.check.019.log
CMake Warning at /usr/share/cmake-3.10/Modules/FindBoost.cmake:1626 (message):
  No header defined for python3; skipping header check
Call Stack (most recent call first):
  CMakeLists.txt:11 (find_package)

CMake Error at /home/root/python_ws/src/vision_opencv/cv_bridge/CMakeLists.txt:16 (find_package):
  Could not find a package configuration file provided by "OpenCV" (requested
  version 3) with any of the following names:

    OpenCVConfig.cmake
    opencv-config.cmake

  Add the installation prefix of "OpenCV" to CMAKE_PREFIX_PATH or set
  "OpenCV_DIR" to a directory containing one of the above files. If "OpenCV"
  provides a separate development package or SDK, be sure it has been
  installed.

make: *** [cmake_check_build_system] Error 1
cd /home/root/python_ws/build/cv_bridge; catkin build --get-env cv_bridge | catkin env -si /usr/bin/make cmake_check_build_system; cd -
.........................................................................................................
Failed     << cv_bridge:check                            [ Exited with code 2 ]
Failed    <<< cv_bridge                                  [ 1.5 seconds ]
Abandoned <<< obj_det                                    [ Unrelated job failed ]
[build] Summary: 0 of 2 packages succeeded.
[build] Ignored:   3 packages were skipped or are blacklisted.
[build] Warnings:  None.
[build] Abandoned: 1 packages were abandoned.
[build] Failed:    1 packages failed.
[build] Runtime: 1.6 seconds total.
I set this workspace up by following this tutorial on the web: https://medium.com/@beta_b0t/how-to-setup-ros-with-python-3-44a69ca36674
Any suggestions on what is going wrong here?
Thanks a lot!
-
Unable to calibrate IMU on RB5
Re: voxl-imu-server is Not Running
I am facing a similar issue to the one in the above topic. The IMU server on my RB5 is enabled, but not running.
I recently upgraded the SDK version on the drone, and hence need to calibrate the IMU again.
systemctl start voxl-imu-server and systemctl start voxl-px4-imu-server both do not work as expected.
Any help would be appreciated. Thanks!
-
RE: Install libopencv on Starling 2
I am working on something very similar as well and am facing the same issue.
I need to run RTAB-MAP for SLAM using the drone's sensors (video and IMU), but RTAB-MAP expects camera_info (the camera matrix and other variables, as explained by @Judoor-0 above) along with the video data.
One way to provide this information to RTAB-MAP is by creating a separate publisher and publishing this data over a ROS topic, as done in one of the posts above. But I suspect it would be better if the voxl-mpa-to-ros package could provide the exact camera matrix and other variables (as specified here - https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CameraInfo.html) as a ROS topic directly.
Kindly let us know if there is any way to implement this.
@Judoor-0, were you able to run SLAM using the stereo data from the drone? Or did you use depth data from a depth camera? The RB5 flight drone that I am using does not provide depth data directly, and hence I was thinking of using the Stereo data package within RTAB-MAP.
Thanks!
-
RE: Rename ROS Topics published by the RB5
@Moderator Thanks a lot, I will look into this!
-
Rename ROS Topics published by the RB5
Hello,
My setup includes an RB5 drone, connected to a local wifi network, running the voxl_mpa_to_ros node. This enables my laptop, connected to the same network, to find and subscribe to the same topics remotely.
I would like to run my own SLAM (such as RTAB-MAP or ORB-SLAM3) based on the image and IMU data received from the drone. This requires me to change the names of the ROS topics published by the drone to the ones expected by the SLAM node.
For example, in case of ORB-SLAM (https://github.com/thien94/orb_slam3_ros?tab=readme-ov-file#4-ros-topics-params-and-services) -
Expected topics:
/camera/image_raw for Mono(-Inertial) node
/camera/left/image_raw for Stereo(-Inertial) node
/camera/right/image_raw for Stereo(-Inertial) node
/imu for Mono/Stereo/RGBD-Inertial node
/camera/rgb/image_raw and /camera/depth_registered/image_raw for RGBD node
Actual topic names:
/hires_large_color
/hires_large_encoded
/hires_large_grey
/hires_small_color
/hires_small_encoded
/hires_small_grey
/hires_snapshot
/imu_px4
/qvio/odometry
/qvio/pose
/qvio_overlay
/rosout
/rosout_agg
/stereo_front/left
/stereo_front/right
/stereo_rear/left
/stereo_rear/right
/tracking
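While waiting for suggestions, one workaround I am considering is a small relay node that subscribes to the drone topics and republishes them under the names ORB-SLAM expects. A minimal sketch of that idea (the drone topic names are taken from the list above; the message types are my assumption):

#!/usr/bin/env python3
# Sketch of a relay node: republish VOXL topics under the names ORB-SLAM3 expects.
import rospy
from sensor_msgs.msg import Image, Imu

# (voxl topic, renamed topic, message type) - mapping is my guess
RELAYS = [
    ('/stereo_front/left',  '/camera/left/image_raw',  Image),
    ('/stereo_front/right', '/camera/right/image_raw', Image),
    ('/imu_px4',            '/imu',                    Imu),
]

if __name__ == '__main__':
    rospy.init_node('topic_relay')
    for src, dst, msg_type in RELAYS:
        pub = rospy.Publisher(dst, msg_type, queue_size=10)
        # the default argument binds this relay's publisher into the lambda
        rospy.Subscriber(src, msg_type, lambda msg, p=pub: p.publish(msg))
    rospy.spin()

I believe topic remapping in a launch file (<remap from="..." to="..."/>) or topic_tools relay could achieve the same thing without a custom node, if that works in my remote setup.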
Any help would be greatly appreciated.
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
Thanks a lot for these resources. I will look into the links mentioned and get back.
Thanks again!
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
I tried using the voxl-streamer and it works with OpenCV imshow at least. I am able to view the camera feed on my remote machine. Thanks a lot for this suggestion. I believe I can work around this.
Thanks!
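For anyone finding this later, this is roughly how I am viewing the stream on the remote machine (a minimal sketch; the RTSP URL assumes the default voxl-streamer port and my drone's IP, so adjust for your setup):

# Minimal sketch: read the voxl-streamer RTSP stream with OpenCV on the laptop.
import cv2

cap = cv2.VideoCapture('rtsp://192.168.8.1:8900/live')  # IP and port are placeholders
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow('voxl-streamer', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()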
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
Hi @Alex-Kushleyev,
Any ideas on what I can do to get the desired output? I did end up asking this question on Stack Overflow, as it seemed to be more of an OpenCV-related question than an issue with the RB5 drone. You may refer to it here.
Any help would be greatly appreciated.
Thanks!
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
Yes, I am able to display it on my remote machine using rosrun rqt_image_view rqt_image_view and then selecting the corresponding topic.
I checked OpenCV's documentation yesterday. As far as I can understand, it expects the image in a 2-channel format when it is in YUV, and "should" be able to convert it to RGB. This seems like more of an issue on OpenCV's side. Any suggestions on what I could possibly do?
I referred to these links:
https://docs.opencv.org/4.x/de/d25/imgproc_color_conversions.html
https://docs.opencv.org/4.x/d8/d01/group__imgproc__color__conversions.html#gga4e0972be5de079fed4e3a10e24ef5ef0a86027a131a58ef888873fafd4eb603d0
Thanks!
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
I tried converting the color space as well, but it produces (essentially) the same error:
[INFO] [1718910969.334643]: receiving video frame
[ERROR] [1718910969.365757]: bad callback: <function callback at 0x7fc17b1cf1f0>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/prabhav/catkin_ws/src/image_subscriber/scripts/image_subscriber.py", line 31, in callback
    rgb_frame = cv2.cvtColor(current_frame, cv2.COLOR_YUV2RGB)
cv2.error: OpenCV(4.10.0) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0, 2, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 2
Apart from cv2.COLOR_YUV2RGB, I tried cv2.COLOR_YUV2RGB_NV12 as well, and that too comes up with the same error.
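One thing I have not tried yet: as far as I understand, the NV12 conversion codes expect a single-channel buffer with height * 3/2 rows rather than a 2-channel image, so reinterpreting the raw message bytes before calling cvtColor might work. A rough sketch of that idea (the buffer layout and message fields are my assumptions):

# Sketch: treat the incoming image data as a raw NV12 buffer and convert it to BGR.
# Assumes the message really carries NV12 (full-resolution Y plane followed by interleaved UV at half resolution).
import numpy as np
import cv2

def nv12_msg_to_bgr(msg):
    # Reinterpret the flat byte buffer as a (height * 3/2, width) single-channel image
    yuv = np.frombuffer(msg.data, dtype=np.uint8).reshape((msg.height * 3 // 2, msg.width))
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_NV12)

If the data really is NV12, I would expect len(msg.data) to be width * height * 3 / 2, which should be easy to verify first.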
Any ideas?
Thanks!
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
Thanks a lot! I will look into this.
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
Yes, it is in fact NV12.
I will look into conversion from NV12 to RGB/BGR.
When I try to subscribe to the compressed stream, the subscriber (on the laptop) hangs up and the publisher (drone) gives me this error message:
Interface hires_small_color now publishing
Interface hires_small_color ceasing to publish
Interface hires_small_encoded now publishing
terminate called after throwing an instance of 'std::bad_alloc'
  what():  std::bad_alloc
[voxl_mpa_to_ros_node-2] process has died [pid 2872, exit code -6, cmd /opt/ros/melodic/lib/voxl_mpa_to_ros/voxl_mpa_to_ros_node __name:=voxl_mpa_to_ros_node __log:=/home/root/.ros/log/391f97e8-2f20-11ef-bc4c-00037f125fa6/voxl_mpa_to_ros_node-2.log].
log file: /home/root/.ros/log/391f97e8-2f20-11ef-bc4c-00037f125fa6/voxl_mpa_to_ros_node-2*.log
after which I have to restart the publisher on the drone. In the meantime, I will look into the NV12 to RGB conversion.
Thanks a lot for the quick response!
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
Hello @Alex-Kushleyev, I have an update on this (and of course an issue as well).
Current setup:
RB5 drone running ROS Melodic, connected to a wifi network. I am running roslaunch voxl_mpa_to_ros voxl_mpa_to_ros.launch to make the topics visible on the network.
My laptop (ROS Noetic) is connected to the same network with ROS networking set up properly. I am able to list the topics on the laptop with rostopic list.
I wrote a simple script that subscribes to the camera topics and simply displays them on my laptop using OpenCV:
#!/usr/bin/env python3
# Import the necessary libraries
import rospy # Python library for ROS
from sensor_msgs.msg import Image # Image is the message type
from cv_bridge import CvBridge # Package to convert between ROS and OpenCV Images
import cv2 # OpenCV library
from ultralytics import YOLO
import torch

print("Imports done, loading model...")
model = YOLO('/home/prabhav/Desktop/IISc/yolov8m.pt')
print("Model loaded!")

def callback(data):
  # Used to convert between ROS and OpenCV images
  #print("Bridge Created")
  br = CvBridge()

  # Output debugging information to the terminal
  rospy.loginfo("receiving video frame")

  # Convert ROS Image message to OpenCV image
  #print("Converting imgmsg to cv2")
  current_frame = br.imgmsg_to_cv2(data)

  #print("Running model on the image")
  #results = model.predict(current_frame, conf=0.5)
  #print(results)

  # Display image
  cv2.imshow("camera", current_frame)
  cv2.waitKey(1)

def receive_message():
  # Tells rospy the name of the node.
  # Anonymous = True makes sure the node has a unique name. Random
  # numbers are added to the end of the name.
  print("Initialization done")
  rospy.init_node('video_sub_py', anonymous=True)

  # Node is subscribing to the video_frames topic
  print("Subscriber Created")
  rospy.Subscriber('/hires_small_color', Image, callback)

  # spin() simply keeps python from exiting until this node is stopped
  print("Spinning")
  rospy.spin()

  # Close down the video stream when done
  cv2.destroyAllWindows()

if __name__ == '__main__':
  receive_message()
Here is the issue I am facing: when I subscribe to a color image, say /hires_small_color, and try to display it using the above code, I get the following error:
[INFO] [1718900460.696906]: receiving video frame
[ERROR] [1718900460.710019]: bad callback: <function callback at 0x7f3c4e6a31f0>
Traceback (most recent call last):
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 750, in _invoke_callback
    cb(msg)
  File "/home/prabhav/catkin_ws/src/image_subscriber/scripts/image_subscriber.py", line 37, in callback
    cv2.imshow("camera", current_frame)
cv2.error: OpenCV(4.10.0) /io/opencv/modules/imgproc/src/color.simd_helpers.hpp:92: error: (-2:Unspecified error) in function 'cv::impl::{anonymous}::CvtHelper<VScn, VDcn, VDepth, sizePolicy>::CvtHelper(cv::InputArray, cv::OutputArray, int) [with VScn = cv::impl::{anonymous}::Set<3, 4>; VDcn = cv::impl::{anonymous}::Set<3, 4>; VDepth = cv::impl::{anonymous}::Set<0, 2, 5>; cv::impl::{anonymous}::SizePolicy sizePolicy = cv::impl::<unnamed>::NONE; cv::InputArray = const cv::_InputArray&; cv::OutputArray = const cv::_OutputArray&]'
> Invalid number of channels in input image:
>     'VScn::contains(scn)'
> where
>     'scn' is 2
Going through the error, I understand that for some reason only two color channels (out of the three required) are present. What I don't get is why this is happening.
I am able to view the video feed from that camera normally on the web portal as well (http://<ip>).
At the same time, subscribing to a monochrome image and simply displaying it does not produce any errors; that video feed also runs properly with OpenCV.
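To narrow it down, my next step is to print the fields of the incoming message inside the callback of the script above, something like this (just a debugging sketch, reusing the imports from that script):

# Debugging sketch: log the raw properties of the incoming sensor_msgs/Image
# to see what format the color stream is actually published in.
def callback(data):
    rospy.loginfo("encoding=%s  %dx%d  step=%d  data_len=%d",
                  data.encoding, data.width, data.height, data.step, len(data.data))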
Any help would be greatly appreciated.
Thanks!
-
RE: Onboard Image Processing using ROS + OpenCV (+ possibly some ML library in the future)
Hello @Alex-Kushleyev, unfortunately I did end up uninstalling the voxl-opencv and some other voxl- packages in an attempt to install OpenCV on the native OS.
I haven't faced any issues up until now. I will ask here in case I need some help.
(Just to be clear, ROS + OpenCV still isn't working as expected because of differing Python versions. As it turns out, Melodic uses Python 2 while cv_bridge in my ROS setup uses Python 3.)
Thanks a lot!