ModalAI Forum

    Install libopencv on Starling 2

    VOXL 2
    starlingv2 ros opencv docker
    • Judoor 0 @wilkinsaf

      @wilkinsaf Okay, thanks for your reply. Sorry, that was not clear. What I mean by data for cameras is data that I can send to RTAB-Map as a sensor_msgs/CameraInfo message. There is some data in the extrinsics files for tracking, but not for other sensors.

      • wilkinsaf (ModalAI Team) @Judoor 0

        @Judoor-0 Gotcha. Let me know if this doesn't answer your question.

        You can use voxl-mpa-to-ros2 (https://gitlab.com/voxl-public/voxl-sdk/utilities/voxl-mpa-to-ros2), which will let you access camera frame information.

        • Judoor 0 @wilkinsaf

           @wilkinsaf Does voxl-mpa-to-ros do the same thing? I currently use ROS1, and for ROS1, voxl-mpa-to-ros only publishes camera images and point clouds, not camera_info.

           • Prabhav Gupta @Judoor 0

            @Judoor-0 @wilkinsaf

             I am working on something very similar and am facing the same issue.

             I need to run RTAB-Map for SLAM using the drone's sensors (video and IMU), but RTAB-Map expects camera_info (the camera matrix and other variables, as explained by @Judoor-0 above) along with the video data.

             One way to provide this information to RTAB-Map is to create a separate publisher and publish the data over a ROS topic, as done in one of the posts above. But I suspect it would be better if the voxl-mpa-to-ros package could publish the exact camera matrix and the other fields (as specified here: https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CameraInfo.html) on a ROS topic directly.

            Kindly let us know if there is any way to implement this.

             @Judoor-0, were you able to run SLAM using the stereo data from the drone? Or did you use depth data from a depth camera? The RB5 flight drone that I am using does not provide depth data directly, so I was thinking of using the stereo pipeline within RTAB-Map.

            Thanks!

             • Alex Kushleyev (ModalAI Team) @Prabhav Gupta

              @Prabhav-Gupta,

               The easiest way to do this, perhaps, is to create a Python script that reads the camera calibration JSON and publishes a latched message via rospy. Similarly, it could be done in C++ with a little more effort. I don't think this currently exists.

              Alex
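A minimal sketch of the script suggested above, under stated assumptions: the helper only assembles the CameraInfo field values as a plain dict (keys match the sensor_msgs/CameraInfo field names), and the rospy publishing side is shown in comments; the calibration-file path and JSON keys are hypothetical and must be adapted to your camera.

```python
# Sketch: build the fields of a sensor_msgs/CameraInfo message from
# calibration values.  Pure Python below; the ROS1 side is in comments.

def camera_info_dict(width, height, distortion_model, D, K, R=None, P=None):
    """Assemble CameraInfo fields as a dict keyed by the message field names.

    K is the 3x3 intrinsic matrix (row-major, 9 values), D the distortion
    coefficients.  If R/P are omitted, default to identity rotation and
    P = [K | 0], the monocular / unrectified case.
    """
    if R is None:
        R = [1.0, 0.0, 0.0,
             0.0, 1.0, 0.0,
             0.0, 0.0, 1.0]
    if P is None:
        # P is 3x4: K with a zero fourth column appended to each row.
        P = [K[0], K[1], K[2], 0.0,
             K[3], K[4], K[5], 0.0,
             K[6], K[7], K[8], 0.0]
    return {"width": width, "height": height,
            "distortion_model": distortion_model,
            "D": list(D), "K": list(K), "R": list(R), "P": list(P)}

# In the actual ROS1 node (not exercised here), the dict maps directly onto
# the message, and latch=True replays it to late subscribers:
#
#   import json, rospy
#   from sensor_msgs.msg import CameraInfo
#   rospy.init_node("camera_info_publisher")
#   with open("/data/modalai/<your_camera>.json") as f:  # hypothetical path/keys
#       cal = json.load(f)
#   msg = CameraInfo(**camera_info_dict(cal["width"], cal["height"],
#                                       "plumb_bob", cal["D"], cal["K"]))
#   pub = rospy.Publisher("/tracking/camera_info", CameraInfo,
#                         queue_size=1, latch=True)
#   pub.publish(msg)
#   rospy.spin()
```

The latched publisher means the message is sent once and cached, so RTAB-Map still receives it even if it subscribes after the script has started.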

               • Prabhav Gupta @Alex Kushleyev

                @Alex-Kushleyev

                Yes, that is similar to what I had in mind. I will try this out.

                Thanks!

                 • Judoor 0 @Prabhav Gupta

                   @Prabhav-Gupta I didn't use the stereo data because I don't have a stereo camera on my drone. I used the point cloud returned by the ToF sensor. But I know that with RTAB-Map you can easily use stereo without depth. There are some examples on the rtabmap_ros website that may help you.

                   • Prabhav Gupta @Judoor 0

                    @Judoor-0 Thanks a lot, I will look into it.

                     • Prabhav Gupta @Alex Kushleyev (last edited by Prabhav Gupta)

                      @Alex-Kushleyev

                       I was trying to create this script:

                      @Alex-Kushleyev said in Install libopencv on Starling 2:

                       The easiest way to do this, perhaps, is to create a Python script that reads the camera calibration JSON and publishes a latched message via rospy. Similarly, it could be done in C++ with a little more effort. I don't think this currently exists.

                      and need some help.

                       I am working with the RB5 drone and found the following in the /data/modalai/ folder on the drone:

                      In opencv_stereo_front_extrinsics.yml:

                       rb5:/data/modalai$ cat opencv_stereo_front_extrinsics.yml
                       %YAML:1.0
                       ---
                       R: !!opencv-matrix
                          rows: 3
                          cols: 3
                          dt: d
                          data: [ 9.9995382093423246e-01, -9.1903872613942166e-03,
                              -2.8094093711296423e-03, 9.2509466314868449e-03,
                              9.9970714036476482e-01, 2.2361875818584419e-02,
                              2.6030723098620125e-03, -2.2386832864208624e-02,
                              9.9974599460505953e-01 ]
                      T: !!opencv-matrix
                         rows: 3
                         cols: 1
                         dt: d
                         data: [ -7.9639069991921108e-02, -9.0666624431155548e-05,
                             -1.0799460870486485e-03 ]
                      reprojection_error: 2.7322523335815047e-01
                      calibration_time: "2022-04-08 23:15:23"
                      

                      and in opencv_stereo_front_intrinsics.yml:

                      rb5:/data/modalai$ cat opencv_stereo_front_intrinsics.yml
                      %YAML:1.0
                      ---
                      M1: !!opencv-matrix
                         rows: 3
                         cols: 3
                         dt: d
                         data: [ 5.0159907997903338e+02, 0., 2.9377341710319376e+02, 0.,
                             5.0083699409439379e+02, 2.3544444409742863e+02, 0., 0., 1. ]
                      D1: !!opencv-matrix
                         rows: 5
                         cols: 1
                         dt: d
                         data: [ -1.6571748643440132e-01, 6.3134583515870882e-02,
                             2.4908601395800438e-03, 6.9258577723375913e-04, 0. ]
                      reprojection_error1: 1.8877343672847582e-01
                      M2: !!opencv-matrix
                         rows: 3
                         cols: 3
                         dt: d
                         data: [ 5.0289492433892644e+02, 0., 3.1156572782508289e+02, 0.,
                             5.0234014337071841e+02, 2.4962793784523797e+02, 0., 0., 1. ]
                      D2: !!opencv-matrix
                         rows: 5
                         cols: 1
                         dt: d
                         data: [ -1.6640627389329365e-01, 6.4800083011513396e-02,
                             1.1988146735987267e-04, -6.3680006718804514e-04, 0. ]
                      reprojection_error2: 1.8906708149286014e-01
                      width: 640
                      height: 480
                      distortion_model: plumb_bob
                      calibration_time: "2022-04-08 23:15:23"
                      

                       To create the script that publishes CameraInfo, I need to figure out which values in these two files correspond to the fields of the CameraInfo class, as given here: https://docs.ros.org/en/noetic/api/sensor_msgs/html/msg/CameraInfo.html

                      sensor_msgs/CameraInfo:

                      std_msgs/Header header
                      uint32 height
                      uint32 width
                      string distortion_model
                      float64[] D
                      float64[9] K
                      float64[9] R
                      float64[12] P
                      uint32 binning_x
                      uint32 binning_y
                      sensor_msgs/RegionOfInterest roi
                      

                       Some of the field mappings are clear, such as height, width, distortion_model, R, and D.

                       How do I derive the K and P fields required by CameraInfo from the calibration files? And is there anything else I should look out for while doing this mapping?

                      Thanks!

                       • Alex Kushleyev (ModalAI Team) @Prabhav Gupta

                        @Prabhav-Gupta ,

                         K corresponds to your M1 and M2 matrices (the 3x3 intrinsic camera matrix, one for each camera).

                         The link you shared has information on how to compute the P matrix. I am not sure whether it would be used by your application, though.

                        Alex
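To make the mapping concrete, a small worked sketch using the numbers quoted above. One caveat, flagged as an assumption: for a properly rectified stereo pair, P should be built from the rectified intrinsics (e.g. via cv2.stereoRectify); using M1/M2 directly ignores rectification and is only an approximation. Per the CameraInfo documentation, the right camera of a stereo pair gets Tx = -fx * B, where B is the baseline (here |T[0]| ≈ 0.0796 m from the extrinsics file).

```python
# Worked sketch: build the 3x4 projection matrix P from K (= M1/M2 above).
# Strictly, P should come from rectified intrinsics (cv2.stereoRectify);
# using M1 directly here is an approximation for illustration only.

def projection_matrix(K, baseline_m=0.0):
    """P = [K | t] as a row-major list of 12 values.  t = (-fx * B, 0, 0):
    zero for the left camera, Tx = -fx * baseline for the right camera of
    a rectified stereo pair."""
    tx = -K[0] * baseline_m  # K[0] is fx
    return [K[0], K[1], K[2], tx,
            K[3], K[4], K[5], 0.0,
            K[6], K[7], K[8], 0.0]

# M1 from opencv_stereo_front_intrinsics.yml (values rounded)
M1 = [501.599, 0.0, 293.773,
      0.0, 500.837, 235.444,
      0.0, 0.0, 1.0]

# Baseline: magnitude of the x component of T in
# opencv_stereo_front_extrinsics.yml, about 0.0796 m (~8 cm).
baseline = 0.0796

P1 = projection_matrix(M1)            # left camera: Tx = 0
P2 = projection_matrix(M1, baseline)  # right camera: Tx = -fx * B
```

P1 and P2 are then the 12-element P arrays to fill into the left and right CameraInfo messages.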

                        • Prabhav Gupta @Alex Kushleyev

                          @Alex-Kushleyev,

                          Cool, Thanks a lot!
