
ROS Programs for Point Cloud Depth Correction with Common Cameras (ZED, Kinect v2, D415)


ZED Camera

Getting the depth at the image center point; RGB-to-depth registration is not handled here (for that, see the D415 section below).

#include <iostream>
#include <fstream>
#include <sstream>
#include <algorithm>
#include <dirent.h>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/image_encodings.h>
#include <opencv2/imgproc/imgproc.hpp>

void depthCallback(const sensor_msgs::Image::ConstPtr& msg)
{
    // Get a pointer to the depth values casting the data
    // pointer to floating point
    float* depths = (float*)(&msg->data[0]);

    // Image coordinates of the center pixel
    int u = msg->width / 2;
    int v = msg->height / 2;

    // Linear index of the center pixel
    int centerIdx = u + msg->width * v;

    // Output the measure
    ROS_INFO("Center distance : %g m", depths[centerIdx]);
}

static const std::string OPENCV_WINDOW = "Image window";

class ImageConverter
{
    ros::NodeHandle nh_;
    image_transport::ImageTransport it_;
    image_transport::Subscriber image_sub_;
    image_transport::Publisher image_pub_;

public:
    ImageConverter()
        : it_(nh_)
    {
        // Subscribe to input video feed and publish output video feed
        image_sub_ = it_.subscribe("/zed/right/image_raw_color", 1,
                                   &ImageConverter::imageCb, this);
        image_pub_ = it_.advertise("/image_converter/output_video", 1);

        cv::namedWindow(OPENCV_WINDOW);
    }

    ~ImageConverter()
    {
        cv::destroyWindow(OPENCV_WINDOW);
    }

    void imageCb(const sensor_msgs::ImageConstPtr& msg)
    {
        cv_bridge::CvImagePtr cv_ptr;
        try
        {
            cv_ptr = cv_bridge::toCvCopy(msg, sensor_msgs::image_encodings::BGR8);
        }
        catch (cv_bridge::Exception& e)
        {
            ROS_ERROR("cv_bridge exception: %s", e.what());
            return;
        }

        // Draw an example circle on the video stream
        if (cv_ptr->image.rows > 60 && cv_ptr->image.cols > 60)
            cv::circle(cv_ptr->image, cv::Point(50, 50), 10, CV_RGB(255,0,0));

        // Update GUI Window
        cv::imshow(OPENCV_WINDOW, cv_ptr->image);
        cv::waitKey(3);

        // Output modified video stream
        image_pub_.publish(cv_ptr->toImageMsg());
    }
};
// blog.csdn.net/qq_29985391/article/details/80247724

int main(int argc, char** argv)
{
    ros::init(argc, argv, "zed_video_subscriber");

    /**
     * NodeHandle is the main access point to communications with the ROS system.
     * The first NodeHandle constructed will fully initialize this node, and the last
     * NodeHandle destructed will close down the node.
     */
    ros::NodeHandle n;
    ros::Subscriber subDepth = n.subscribe("/zed/depth/depth_registered", 10, depthCallback);

    // Only one ImageConverter object needs to be declared
    ImageConverter ic;

    ros::spin();
    return 0;
}
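One caveat about the raw cast in depthCallback above: it assumes the depth topic carries one 32-bit float per pixel in meters (32FC1), which is what the ZED wrapper publishes. If in doubt, a variant with an explicit encoding check is safer. A minimal sketch:

void depthCallback(const sensor_msgs::Image::ConstPtr& msg)
{
    // Bail out unless the data really is one float per pixel (32FC1),
    // so the cast below cannot misread the buffer
    if (msg->encoding != sensor_msgs::image_encodings::TYPE_32FC1)
    {
        ROS_WARN_ONCE("Unexpected depth encoding '%s', expected 32FC1", msg->encoding.c_str());
        return;
    }
    const float* depths = reinterpret_cast<const float*>(&msg->data[0]);
    int centerIdx = msg->width / 2 + msg->width * (msg->height / 2);
    ROS_INFO("Center distance : %g m", depths[centerIdx]);
}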

CMakeLists.txt is written as follows:

cmake_minimum_required(VERSION 2.8.3)
project(zed_depth_sub_tutorial)

# if CMAKE_BUILD_TYPE is not specified, take 'Release' as default
IF(NOT CMAKE_BUILD_TYPE)
    SET(CMAKE_BUILD_TYPE Release)
ENDIF(NOT CMAKE_BUILD_TYPE)

## Find catkin and any catkin packages
find_package(catkin REQUIRED COMPONENTS
    cv_bridge
    image_transport
    roscpp
    sensor_msgs
    std_msgs
    geometry_msgs
)
find_package(OpenCV REQUIRED)

if(NOT WIN32)
    ADD_DEFINITIONS("-std=c++0x -O3")
endif(NOT WIN32)

## Declare a catkin package
catkin_package()

include_directories(include ${catkin_INCLUDE_DIRS})
include_directories(${OpenCV_INCLUDE_DIRS})

## Build
add_executable(zed_depth_sub src/zed_depth_sub_tutorial.cpp)
##target_link_libraries(zed_depth_sub ${catkin_LIBRARIES})
target_link_libraries(zed_depth_sub
    ${OpenCV_LIBRARIES}
    ${OpenCV_LIBS}
    ${catkin_LIBRARIES}
)

package.xml:

<package>
    <name>zed_depth_sub_tutorial</name>
    <version>2.7.0</version>
    <description>This package is a tutorial showing how to subscribe to ZED depth streams</description>
    <maintainer email="support@stereolabs.com">STEREOLABS</maintainer>
    <license>MIT</license>
    <url type="website">http://stereolabs.com/</url>
    <url type="repository">https://github.com/stereolabs/zed-ros-wrapper</url>
    <url type="bugtracker">https://github.com/stereolabs/zed-ros-wrapper/issues</url>
    <buildtool_depend>catkin</buildtool_depend>
    <build_depend>cv_bridge</build_depend>
    <build_depend>image_transport</build_depend>
    <build_depend>opencv2</build_depend>
    <build_depend>roscpp</build_depend>
    <build_depend>sensor_msgs</build_depend>
    <build_depend>std_msgs</build_depend>
    <run_depend>cv_bridge</run_depend>
    <run_depend>image_transport</run_depend>
    <run_depend>opencv2</run_depend>
    <run_depend>roscpp</run_depend>
    <run_depend>sensor_msgs</run_depend>
    <run_depend>std_msgs</run_depend>
</package>

If it reports an error and just won't compile: when building with Qt you get an error but no indication of where it is. In that case, go to the workspace root (mine is open_cv_ws) and run catkin_make there. Its output showed that the format="2" attribute in package.xml should be removed; after deleting it and rebuilding, everything compiled fine. The change looks like this:
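For reference, a sketch of the edit to the root tag of package.xml (your file may of course differ):

<!-- before: the root tag carries the format attribute catkin complained about -->
<package format="2">
<!-- after: attribute removed -->
<package>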

Summary

The topic /zed/right/image_raw_color provides the ZED color image; wrap the subscription in a class, and the main program only needs to instantiate one object.
Know how to add dependencies: in CMakeLists.txt add find_package(catkin REQUIRED COMPONENTS cv_bridge image_transport roscpp sensor_msgs std_msgs geometry_msgs) and find_package(OpenCV REQUIRED).

Kinect v2

Saving color image and depth map sequences

ROS

Anyone working on robotics will be familiar with this part. The Robot Operating System (ROS) is very widely used, with plenty of open-source libraries and packages available, and the mainstream sensors are all supported in ROS, the Kinect2 included. ROS installs on top of Ubuntu; the versions I use are Ubuntu 14.04 + ROS Indigo. For installation, see the official guide; the steps are simple. Run the following commands in order:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net --recv-key 0xB01FA116
sudo apt-get update
sudo apt-get install ros-indigo-desktop-full
sudo rosdep init
rosdep update
echo "source /opt/ros/indigo/setup.bash" >> ~/.bashrc
source ~/.bashrc
sudo apt-get install python-rosinstall

Once these commands finish, ROS is installed. Next, create your own workspace by running:

mkdir -p ~/catkin_ws/src
cd ~/catkin_ws/src
catkin_init_workspace
cd ..
catkin_make

Driver node setup

Using the Kinect2 in ROS relies mainly on the iai_kinect2 package. GitHub: https://github.com/code-iai/iai_kinect2 .

First, install its dependency libfreenect2 as follows (the steps below assume Ubuntu 14.04 or newer; for other systems, see the original project page):

libfreenect2

Download the libfreenect2 source:

git clone https://github.com/OpenKinect/libfreenect2.git
cd libfreenect2

Download the upgrade deb files:

cd depends; ./download_debs_trusty.sh

Install the build tools:

sudo apt-get install build-essential cmake pkg-config

Install libusb (version >= 1.0.20 required):

sudo dpkg -i debs/libusb*deb

Install TurboJPEG:

sudo apt-get install libturbojpeg libjpeg-turbo8-dev

Install OpenGL:

sudo dpkg -i debs/libglfw3*deb
sudo apt-get install -f
sudo apt-get install libgl1-mesa-dri-lts-vivid

(If the last command conflicts with other packages, don't do it.) That is the note from the original instructions; in my case the third command did indeed error out, so I simply skipped it.

----Install OpenCL (optional)
-----------Intel GPU:

sudo apt-add-repository ppa:floe/beignet
sudo apt-get update
sudo apt-get install beignet-dev
sudo dpkg -i debs/ocl-icd*deb

-----------AMD GPU: apt-get install opencl-headers

-----------Verify the installation: clinfo
----Install CUDA (optional, Nvidia only):

-----------(Ubuntu 14.04 only) Download cuda-repo-ubuntu1404…*.deb (“deb (network)”) from Nvidia website, follow their installation instructions, including apt-get install cuda which installs Nvidia graphics driver.
-----------(Jetson TK1) It is preloaded.
-----------(Nvidia/Intel dual GPUs) After apt-get install cuda, use sudo prime-select intel to use Intel GPU for desktop.
-----------(Other) Follow Nvidia website’s instructions.
----Install VAAPI (optional, Intel only): sudo dpkg -i debs/{libva,i965}*deb; sudo apt-get install -f
----Install OpenNI2 (optional)

sudo apt-add-repository ppa:deb-rob/ros-trusty && sudo apt-get update
sudo apt-get install libopenni2-dev

----Build

cd ..
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/freenect2 -DENABLE_CXX11=ON
make
make install

About the cmake command above: the first flag specifies the install location. You can put it anywhere you like, but the standard paths are the one shown or /usr/local. The second flag enables C++11 support.
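For example, a system-wide install would be invoked as (using the standard /usr/local prefix mentioned above):

cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local -DENABLE_CXX11=ON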

----Set up the udev rules: sudo cp ../platform/linux/udev/90-kinect2.rules /etc/udev/rules.d/, then replug the Kinect2.
----All done. Now try the demo program: ./bin/Protonect. Barring surprises, you should see output like the following:

iai-kinect2

Download the project source from GitHub into the src folder of your workspace:

cd ~/catkin_ws/src/
git clone https://github.com/code-iai/iai_kinect2.git
cd iai_kinect2
rosdep install -r --from-paths .
cd ~/catkin_ws
catkin_make -DCMAKE_BUILD_TYPE="Release"

Regarding the last command above: if you installed libfreenect2 somewhere other than the two standard paths, you need to pass a flag pointing at the libfreenect2 location:

catkin_make -Dfreenect2_DIR=path_to_freenect2/lib/cmake/freenect2 -DCMAKE_BUILD_TYPE="Release"

When the build finishes without problems, you will see output like this:

Finally, the exciting part: getting Kinect2 data in ROS.
PS: many people run into "cannot run" failures with the command below; most of the time the corresponding setup file simply hasn't been sourced. Run source ~/catkin_ws/devel/setup.bash first; if you have already added it to ~/.bashrc, ignore this.

roslaunch kinect2_bridge kinect2_bridge.launch


Launching the kinect2 nodes with roslaunch gives the output below. In another terminal, run rostopic list to see the topics this node publishes, or run rosrun kinect2_viewer kinect2_viewer sd cloud to open a viewer. The result looks like this:

Basic usage

I hadn't looked at this post in a long time; the content above came from an earlier project that needed the Kinect2, which I tried out and wrote up here.

Today someone messaged me on the site asking about this post. It's genuinely nice when something you share helps someone! As for the question asked, I had in fact implemented it in earlier work; below I try to reconstruct it from memory.

Saving images

The goal is simple: save the Kinect2 RGB images. Earlier we ran rosrun kinect2_viewer kinect2_viewer sd cloud to view the stream. This command just runs the kinect2_viewer node from the kinect2_viewer package with two arguments, sd and cloud. Enter the package directory and you can read the node's source. Its main function is excerpted below:

int main(int argc, char **argv)
{
    ... ...  // some code omitted

    topicColor = "/" + ns + topicColor;
    topicDepth = "/" + ns + topicDepth;
    OUT_INFO("topic color: " FG_CYAN << topicColor << NO_COLOR);
    OUT_INFO("topic depth: " FG_CYAN << topicDepth << NO_COLOR);

    // Receiver is a class, also defined in this file. useExact(true), useCompressed(false)
    Receiver receiver(topicColor, topicDepth, useExact, useCompressed);

    OUT_INFO("starting receiver...");
    // Run-time argument "cloud" => mode = Receiver::CLOUD
    // Run-time argument "image" => mode = Receiver::IMAGE
    // Run-time argument "both"  => mode = Receiver::BOTH
    receiver.run(mode);

    ros::shutdown();
    return 0;
}

The Receiver class is not too complex. The main display loop lives in imageViewer() or cloudViewer(); which one is called depends on the argument, with the mapping shown below:

switch(mode)
{
case CLOUD:
    cloudViewer();
    break;
case IMAGE:
    imageViewer();
    break;
case BOTH:
    imageViewerThread = std::thread(&Receiver::imageViewer, this);
    cloudViewer();
    break;
}

The BOTH option opens two display windows. Both functions already implement image saving. The code is excerpted below; the two implementations are similar, so only imageViewer() is shown:

int key = cv::waitKey(1);
switch(key & 0xFF)
{
case 27:
case 'q':
    running = false;
    break;
case ' ':
case 's':
    if(mode == IMAGE)
    {
        createCloud(depth, color, cloud);
        saveCloudAndImages(cloud, color, depth, depthDisp);
    }
    else
    {
        save = true;
    }
    break;
}

The key function saveCloudAndImages() is implemented as follows:

void saveCloudAndImages(const pcl::PointCloud<pcl::PointXYZRGBA>::ConstPtr cloud,
                        const cv::Mat &color, const cv::Mat &depth, const cv::Mat &depthColored)
{
    oss.str("");
    oss << "./" << std::setfill('0') << std::setw(4) << frame;
    // All files are saved under the current directory
    const std::string baseName = oss.str();
    const std::string cloudName = baseName + "_cloud.pcd";
    const std::string colorName = baseName + "_color.jpg";
    const std::string depthName = baseName + "_depth.png";
    const std::string depthColoredName = baseName + "_depth_colored.png";

    OUT_INFO("saving cloud: " << cloudName);
    // writer is a pcl::PCDWriter member of this class
    writer.writeBinary(cloudName, *cloud);
    OUT_INFO("saving color: " << colorName);
    cv::imwrite(colorName, color, params);
    OUT_INFO("saving depth: " << depthName);
    cv::imwrite(depthName, depth, params);
    OUT_INFO("saving depth: " << depthColoredName);
    cv::imwrite(depthColoredName, depthColored, params);
    OUT_INFO("saving complete!");
    ++frame;
}

As the code shows, to save an image you just focus the display window and press the s key or the space bar; the files land in the current directory.

If you have more particular requirements, building on this source will get you there with half the effort. Below is a small example.

Saving an image sequence

Suppose you want to save an image sequence, controlling only when it starts and stops: for example, press B (Begin) to start saving and E (End) to stop.

This only takes small changes to the source. First, handling of the B and E keys has to be added to the switch(key & 0xFF) block excerpted above; for continuous saving, a simple flag is enough.

First, copy viewer.cpp to a new file named save_seq.cpp. In save_seq.cpp, add a new member bool save_seq to the Receiver class, right below the existing bool save member, and initialize it in the constructor's initializer list with save_seq(false). A sketch of the change is shown below.
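A minimal sketch of that change (only the relevant lines, surrounding code elided):

// In class Receiver, next to the existing flag:
bool save;
bool save_seq;   // new: true while a B...E capture sequence is running

// In the constructor's initializer list, alongside save(false):
// ..., save(false), save_seq(false), ...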

Locate the void imageViewer() function and modify its switch(key & 0xFF) block as follows:

int key = cv::waitKey(1);
switch(key & 0xFF)
{
case 27:
case 'q':
    running = false;
    break;
case 'b':
    save_seq = true;
    save = true;
    break;
case 'e':
    save_seq = false;
    save = false;
    break;
case ' ':
case 's':
    if (save_seq)
        break;
    if(mode == IMAGE)
    {
        createCloud(depth, color, cloud);
        saveCloudAndImages(cloud, color, depth, depthDisp);
    }
    else
    {
        save = true;
    }
    break;
}
if (save_seq)
{
    createCloud(depth, color, cloud);
    saveCloudAndImages(cloud, color, depth, depthDisp);
}

Locate the void cloudViewer() function and modify its if (save) block as follows:

if(save || save_seq)
{
    save = false;
    cv::Mat depthDisp;
    dispDepth(depth, depthDisp, 12000.0f);
    saveCloudAndImages(cloud, color, depth, depthDisp);
}

Locate the void keyboardEvent(const pcl::visualization::KeyboardEvent &event, void *) function and modify it as follows:

if(event.keyUp())
{
    switch(event.getKeyCode())
    {
    case 27:
    case 'q':
        running = false;
        break;
    case ' ':
    case 's':
        save = true;
        break;
    case 'b':
        save_seq = true;
        break;
    case 'e':
        save_seq = false;
        break;
    }
}

At the end of CMakeLists.txt, add:

add_executable(save_seq src/save_seq.cpp)
target_link_libraries(save_seq
    ${catkin_LIBRARIES}
    ${OpenCV_LIBRARIES}
    ${PCL_LIBRARIES}
    ${kinect2_bridge_LIBRARIES}
)

Go back to the catkin workspace root and run catkin_make. If it compiles and links cleanly, you can try it out. Of course, kinect2_bridge has to be started first. Run the following in order:

roslaunch kinect2_bridge kinect2_bridge.launch
rosrun kinect2_viewer save_seq hd cloud

Focus the pop-up window; press B to begin saving and E to end it. The terminal output looks like this:

Clicking the point cloud to get coordinates

The idea: as the mouse moves over the point cloud view, continuously display the 3D coordinates of the point under the cursor; on a mouse click, take that coordinate and publish it.

That requires a mouse callback that reacts whenever the mouse state changes. Mouse changes can be reduced to three cases:

----Mouse move
----Left-button click
----Right-button click
Since a callback is involved and this is only a small feature, the implementation is simple: three global variables represent the three states (the code needs C++11 support; to avoid that, change the type to volatile int), plus one ordinary global function:

std::atomic_int mouseX;
std::atomic_int mouseY;
std::atomic_int mouseBtnType;

void onMouse(int event, int x, int y, int flags, void* ustc)
{
    // std::cout << "onMouse: " << x << " " << y << std::endl;
    mouseX = x;
    mouseY = y;
    mouseBtnType = event;
}

In the initialization, add two ros::Publishers for the topics that left- and right-button presses should publish to:

ros::Publisher leftBtnPointPub =
    nh.advertise<geometry_msgs::PointStamped>("/kinect2/click_point/left", 1);
ros::Publisher rightBtnPointPub =
    nh.advertise<geometry_msgs::PointStamped>("/kinect2/click_point/right", 1);

The message type is geometry_msgs/PointStamped, a standard ROS message; include the header #include <geometry_msgs/PointStamped.h> at the top of the source. It is defined as:

std_msgs/Header header
    uint32 seq
    time stamp
    string frame_id
geometry_msgs/Point point
    float64 x
    float64 y
    float64 z
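As an aside, a minimal sketch (not part of the original post) of a node that consumes the clicked points published on these topics might look like this:

// click_listener.cpp -- a minimal consumer sketch for the topics above
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>

void clickCb(const geometry_msgs::PointStamped::ConstPtr& msg)
{
    // Print the 3D point and the frame it is expressed in
    ROS_INFO("click in %s: (%.3f, %.3f, %.3f)",
             msg->header.frame_id.c_str(),
             msg->point.x, msg->point.y, msg->point.z);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "click_listener");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/kinect2/click_point/left", 10, clickCb);
    ros::spin();
    return 0;
}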

Then rewrite the cloudViewer() function as follows:

void cloudViewer()
{
    cv::Mat color, depth;
    std::chrono::time_point<std::chrono::high_resolution_clock> start, now;
    double fps = 0;
    size_t frameCount = 0;
    std::ostringstream oss;
    std::ostringstream ossXYZ;  // new string stream for the XYZ text
    const cv::Point pos(5, 15);
    const cv::Scalar colorText = CV_RGB(255, 0, 0);
    const double sizeText = 0.5;
    const int lineText = 1;
    const int font = cv::FONT_HERSHEY_SIMPLEX;
    // Read the current mouse position from the globals
    int img_x = mouseX;
    int img_y = mouseY;
    geometry_msgs::PointStamped ptMsg;
    ptMsg.header.frame_id = "kinect_link";

    lock.lock();
    color = this->color;
    depth = this->depth;
    updateCloud = false;
    lock.unlock();

    const std::string window_name = "color viewer";
    cv::namedWindow(window_name);
    // Register the mouse callback; nullptr is a C++11 keyword,
    // replace it with NULL if C++11 is not available
    cv::setMouseCallback(window_name, onMouse, nullptr);
    createCloud(depth, color, cloud);

    for(; running && ros::ok();)
    {
        if(updateCloud)
        {
            lock.lock();
            color = this->color;
            depth = this->depth;
            updateCloud = false;
            lock.unlock();

            createCloud(depth, color, cloud);

            img_x = mouseX;
            img_y = mouseY;
            const pcl::PointXYZRGBA& pt = cloud->points[img_y * depth.cols + img_x];
            ptMsg.point.x = pt.x;
            ptMsg.point.y = pt.y;
            ptMsg.point.z = pt.z;

            // Publish the 3D point to a different topic depending on
            // whether the left or the right button was pressed
            switch (mouseBtnType)
            {
            case cv::EVENT_LBUTTONUP:
                ptMsg.header.stamp = ros::Time::now();
                leftBtnPointPub.publish(ptMsg);
                ros::spinOnce();
                break;
            case cv::EVENT_RBUTTONUP:
                ptMsg.header.stamp = ros::Time::now();
                rightBtnPointPub.publish(ptMsg);
                ros::spinOnce();
                break;
            default:
                break;
            }
            mouseBtnType = cv::EVENT_MOUSEMOVE;

            ++frameCount;
            now = std::chrono::high_resolution_clock::now();
            double elapsed =
                std::chrono::duration_cast<std::chrono::milliseconds>(now - start).count() / 1000.0;
            if(elapsed >= 1.0)
            {
                fps = frameCount / elapsed;
                oss.str("");
                oss << "fps: " << fps << " ( " << elapsed / frameCount * 1000.0 << " ms)";
                start = now;
                frameCount = 0;
            }
            cv::putText(color, oss.str(), pos, font, sizeText, colorText, lineText, CV_AA);

            ossXYZ.str("");
            ossXYZ << "( " << ptMsg.point.x << ", " << ptMsg.point.y
                   << ", " << ptMsg.point.z << " )";
            cv::putText(color, ossXYZ.str(), cv::Point(img_x, img_y), font, 1, colorText, 3, CV_AA);
            // cv::circle(color, cv::Point(mouseX, mouseY), 5, cv::Scalar(0, 0, 255), -1);
            cv::imshow(window_name, color);
            // cv::imshow(window_name, depth);
            cv::waitKey(1);
        }
    }
    cv::destroyAllWindows();
    cv::waitKey(100);
}

Finally, a slight rewrite of the main function finishes the job. The whole main function is excerpted below; the redundant argument parsing is removed to keep usage fixed and simple.

int main(int argc, char **argv)
{
#if EXTENDED_OUTPUT
    ROSCONSOLE_AUTOINIT;
    if(!getenv("ROSCONSOLE_FORMAT"))
    {
        ros::console::g_formatter.tokens_.clear();
        ros::console::g_formatter.init("[${severity}] ${message}");
    }
#endif

    ros::init(argc, argv, "kinect2_viewer", ros::init_options::AnonymousName);

    if(!ros::ok())
    {
        return 0;
    }

    std::string ns = K2_DEFAULT_NS;
    std::string topicColor = K2_TOPIC_HD K2_TOPIC_IMAGE_COLOR K2_TOPIC_IMAGE_RECT;
    std::string topicDepth = K2_TOPIC_HD K2_TOPIC_IMAGE_DEPTH K2_TOPIC_IMAGE_RECT;
    bool useExact = true;
    bool useCompressed = false;
    Receiver::Mode mode = Receiver::CLOUD;

    topicColor = "/" + ns + topicColor;
    topicDepth = "/" + ns + topicDepth;
    OUT_INFO("topic color: " FG_CYAN << topicColor << NO_COLOR);
    OUT_INFO("topic depth: " FG_CYAN << topicDepth << NO_COLOR);

    Receiver receiver(topicColor, topicDepth, useExact, useCompressed);

    OUT_INFO("starting receiver...");
    OUT_INFO("Please click mouse in color viewer...");
    receiver.run(mode);

    ros::shutdown();
    return 0;
}

Add the following two stanzas to CMakeLists.txt, then make:

add_executable(click_rgb src/click_rgb.cpp)
target_link_libraries(click_rgb
    ${catkin_LIBRARIES}
    ${OpenCV_LIBRARIES}
    ${PCL_LIBRARIES}
    ${kinect2_bridge_LIBRARIES}
)

install(TARGETS click_rgb
    RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)

To run it, rosrun kinect2_viewer click_rgb is all you need. The effect is shown below; the red coordinate text in the figure sits at the actual mouse position (the cursor itself doesn't show up in screenshots).
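To confirm that clicks are actually being published, you can also echo the topic in another terminal:

rostopic echo /kinect2/click_point/left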

Matching color and depth images with the D415 depth camera

For environment setup, see my other post.

  • Recently, while using the D435 for object recognition and grasping, I found there was no ready-to-use ROS node for converting a 2D pixel coordinate to a 3D coordinate. The Kinect V2 coordinate mapping I wrote before generalized poorly, so this time I wrote a node that uses the depth-aligned-to-color image topic and the camera-info (intrinsics) topic published by the depth camera's ROS node to map 2D coordinates into the 3D frame.
  • The node posted here maps the 2D point under the mouse into 3D and publishes it on a topic visualized in rviz. My object recognition publishes pixel coordinates on a custom message type, so that part is not included; if you need it, just replace my mouse-derived coordinates with your own.
  • Principle: the depth camera publishes a cropped, aligned depth or color image topic. Taking the D415 as an example, after running roslaunch realsense2_camera rs_camera.launch, the ROS node publishes /camera/aligned_depth_to_color/image_raw (depth aligned to color). So you only need to map the color-image coordinate to the same coordinate in the aligned depth image, read the depth there, and compute the 3D point via the intrinsic matrix, which is itself published on the /camera/aligned_depth_to_color/camera_info topic and can be read directly. The math is spelled out right after this list.
  • Demo: (to fit the GIF size limit the frame rate had to suffer; the mouse on the left maps to the pink ball on the right)
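For reference, this is the back-projection that coordinate_map.cpp below performs, assuming the pinhole model: with d(u, v) the 16-bit depth in millimeters at pixel (u, v), and the intrinsics read from the K matrix of the camera_info message (fx = K[0], cx = K[2], fy = K[4], cy = K[5]),

z = 0.001 * d(u, v)
x = (u - cx) / fx * z
y = (v - cy) / fy * z

One caveat: depending on your realsense2_camera version, the aligned topics may only appear when alignment is enabled at launch (e.g. via an align_depth argument); if the topic is missing, check your wrapper's launch file.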

Code:

coordinate_map.cpp

/**********************
coordinate_map.cpp
author: wxw
email: wangxianwei1994@foxmail.com
time: 2019-7-29
**********************/

#include <iostream>
#include <string>
#include <opencv2/opencv.hpp>
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <geometry_msgs/PointStamped.h>
#include <image_transport/image_transport.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/Image.h>

using namespace cv;
using namespace std;

class ImageConverter
{
private:
    ros::NodeHandle nh_;
    image_transport::ImageTransport it_;
    image_transport::Subscriber image_sub_color; // subscribes to the color image
    image_transport::Subscriber image_sub_depth; // subscribes to the depth image
    ros::Subscriber camera_info_sub_;            // camera info for the aligned depth image
    ros::Publisher arm_point_pub_;               // publishes a 3D point, e.g. for rviz

    sensor_msgs::CameraInfo camera_info;
    geometry_msgs::PointStamped output_point;

    /* Mat depthImage, colorImage; */
    Mat colorImage;
    Mat depthImage = Mat::zeros( 480, 640, CV_16UC1 ); // NOTE: match the size of the depth image you receive
    Point mousepos = Point( 0, 0 ); /* mouse point to be mapped */

public:
    // Capture the mouse position; written out through the param
    // pointer to the class member Point mousepos
    static void on_mouse( int event, int x, int y, int flags, void *param )
    {
        switch ( event )
        {
        case CV_EVENT_MOUSEMOVE: /* move mouse */
        {
            Point &tmppoint = *(cv::Point *) param;
            tmppoint = Point( x, y );
        }
        break;
        }
    }

    ImageConverter() : it_( nh_ )
    {
        // topic sub:
        image_sub_depth = it_.subscribe( "/camera/aligned_depth_to_color/image_raw",
                                         1, &ImageConverter::imageDepthCb, this );
        image_sub_color = it_.subscribe( "/camera/color/image_raw", 1,
                                         &ImageConverter::imageColorCb, this );
        camera_info_sub_ =
            nh_.subscribe( "/camera/aligned_depth_to_color/camera_info", 1,
                           &ImageConverter::cameraInfoCb, this );
        // topic pub:
        arm_point_pub_ =
            nh_.advertise<geometry_msgs::PointStamped>( "/mouse_point", 10 );

        cv::namedWindow( "colorImage" );
        setMouseCallback( "colorImage", &ImageConverter::on_mouse,
                          (void *) &mousepos );
    }

    ~ImageConverter()
    {
        cv::destroyWindow( "colorImage" );
    }

    void cameraInfoCb( const sensor_msgs::CameraInfo &msg )
    {
        camera_info = msg;
    }

    void imageDepthCb( const sensor_msgs::ImageConstPtr &msg )
    {
        cv_bridge::CvImagePtr cv_ptr;
        try {
            cv_ptr =
                cv_bridge::toCvCopy( msg, sensor_msgs::image_encodings::TYPE_16UC1 );
            depthImage = cv_ptr->image;
        } catch ( cv_bridge::Exception &e ) {
            ROS_ERROR( "cv_bridge exception: %s", e.what() );
            return;
        }
    }

    void imageColorCb( const sensor_msgs::ImageConstPtr &msg )
    {
        cv_bridge::CvImagePtr cv_ptr;
        try {
            cv_ptr = cv_bridge::toCvCopy( msg, sensor_msgs::image_encodings::BGR8 );
            colorImage = cv_ptr->image;
        } catch ( cv_bridge::Exception &e ) {
            ROS_ERROR( "cv_bridge exception: %s", e.what() );
            return;
        }

        // Look up the depth in the aligned depth image, then solve for the
        // 3D point using the intrinsic matrix read from camera info
        float real_z = 0.001 * depthImage.at<u_int16_t>( mousepos.y, mousepos.x );
        float real_x =
            (mousepos.x - camera_info.K.at( 2 ) ) / camera_info.K.at( 0 ) * real_z;
        float real_y =
            (mousepos.y - camera_info.K.at( 5 ) ) / camera_info.K.at( 4 ) * real_z;

        char tam[100];
        sprintf( tam, "(%0.2f,%0.2f,%0.2f)", real_x, real_y, real_z );
        putText( colorImage, tam, mousepos, FONT_HERSHEY_SIMPLEX, 0.6,
                 cvScalar( 0, 0, 255 ), 1 ); // draw on screen
        circle( colorImage, mousepos, 2, Scalar( 255, 0, 0 ) );

        output_point.header.frame_id = "/camera_depth_optical_frame";
        output_point.header.stamp = ros::Time::now();
        output_point.point.x = real_x;
        output_point.point.y = real_y;
        output_point.point.z = real_z;
        arm_point_pub_.publish( output_point );

        cv::imshow( "colorImage", colorImage );
        cv::waitKey( 1 );
    }
};

int main( int argc, char **argv )
{
    ros::init( argc, argv, "coordinate_map" );
    ImageConverter imageconverter;
    ros::spin();
    return(0);
}

CMakeLists.txt

cmake_minimum_required(VERSION 2.8.3)
project(coordinate_map)

## Compile as C++11, supported in ROS Kinetic and newer
add_compile_options(-std=c++11)

find_package(catkin REQUIRED COMPONENTS
    roscpp
    rostime
    std_msgs
    sensor_msgs
    message_filters
    cv_bridge
    image_transport
    compressed_image_transport
    tf
    compressed_depth_image_transport
    geometry_msgs
)

## System dependencies are found with CMake's conventions
find_package(OpenCV REQUIRED)

catkin_package()

include_directories(include
    ${catkin_INCLUDE_DIRS}
    ${OpenCV_INCLUDE_DIRS}
)

add_executable(coordinate_map src/coordinate_map.cpp)
target_link_libraries(coordinate_map
    ${catkin_LIBRARIES}
    ${OpenCV_LIBRARIES}
)

package.xml

<?xml version="1.0"?>
<package>
    <name>coordinate_map</name>
    <version>1.0.0</version>
    <description>coordinate_map package</description>
    <maintainer email="wangxianwei1994@foxmail.com">Wangxianwei</maintainer>
    <license>BSD</license>

    <buildtool_depend>catkin</buildtool_depend>
    <build_depend>roscpp</build_depend>
    <build_depend>rostime</build_depend>
    <build_depend>std_msgs</build_depend>
    <build_depend>sensor_msgs</build_depend>
    <build_depend>message_filters</build_depend>
    <build_depend>image_transport</build_depend>
    <build_depend>compressed_image_transport</build_depend>
    <build_depend>compressed_depth_image_transport</build_depend>
    <build_depend>cv_bridge</build_depend>
    <build_depend>tf</build_depend>
    <build_depend>nav_msgs</build_depend>

    <run_depend>message_runtime</run_depend>
    <run_depend>roscpp</run_depend>
    <run_depend>rostime</run_depend>
    <run_depend>std_msgs</run_depend>
    <run_depend>sensor_msgs</run_depend>
    <run_depend>message_filters</run_depend>
    <run_depend>image_transport</run_depend>
    <run_depend>compressed_image_transport</run_depend>
    <run_depend>compressed_depth_image_transport</run_depend>
    <run_depend>cv_bridge</run_depend>
    <run_depend>tf</run_depend>
    <run_depend>nav_msgs</run_depend>

    <export></export>
</package>
