
图漾 (Percipio) Depth Camera Development: Real-Time Point Cloud Display with PCL


Contents

  • 1. Starting from the Sample Program SimpleView_FetchFrame
    • Program Functionality
    • Code Walkthrough
  • 2. Creating Your Own Point Cloud Program
    • File Structure
    • Creating the Point Cloud
    • Complete Code for Real-Time Point Cloud Display
  • 3. Creating a Standalone Project


  • Camera model: 图漾科技 (Percipio) FS820 depth camera
    【Spec sheet】【Depth camera developer documentation】【SDK download】
  • Build environment: Ubuntu 18.04 / C++ / VS Code
  • Libraries: OpenCV + PCL
  • Basic setup of the camera is covered in an earlier post. Building on the ability to run the sample programs, this article develops against the camera for 3D point cloud processing; the feature implemented here is real-time point cloud display.

1. Starting from the Sample Program SimpleView_FetchFrame

Program Functionality

SimpleView_FetchFrame is a sample program that fetches image data from the depth camera and renders it with OpenCV inside the fetch thread. It is used here to illustrate the image acquisition flow 【complete image acquisition workflow】.

Running the program opens four windows: Color, Depth, LeftIR, and RightIR.

Code Walkthrough

Open sample/SimpleView_FetchFrame/main.cpp and walk through the code:

Reading from main(), you will see many LOGD() calls. These print debug messages and effectively serve as inline commentary (e.g. LOGD("Init lib") signals that the next block initializes the API and sets up data structures such as the device object).
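LOGD itself is defined in the SDK's common code rather than in this file. As a rough mental model only (an assumption for illustration, not the SDK's actual definition), it behaves like a printf-style macro:

```cpp
// Hypothetical stand-ins for the SDK's LOGD/LOGI macros; the real ones live in
// the SDK's common headers and may add timestamps, log levels, or file/line info.
// (##__VA_ARGS__ is a GNU extension, also accepted by Clang and MSVC.)
#include <cstdio>
#define LOGD(fmt, ...) std::printf("DEBUG: " fmt "\n", ##__VA_ARGS__)
#define LOGI(fmt, ...) std::printf("INFO:  " fmt "\n", ##__VA_ARGS__)
```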

As developers, what we care about is how to obtain the camera's data for further processing: the fetch loop (the "Loop" stage in the SDK's flow diagram, not reproduced here). This loop continuously fetches frame data from the camera and processes it; in this sample the processing is simply rendering the Color, Depth, LeftIR, and RightIR windows.

Reading on until LOGD("While loop to fetch frame"), the block that follows is the frame fetch loop. Here is the code, annotated below:

LOGD("While loop to fetch frame");bool exit_main = false; TY_FRAME_DATA frame;int index = 0;while(!exit_main) {int err = TYFetchFrame(hDevice, &frame, -1);if( err == TY_STATUS_OK ) {LOGD("Get frame %d", ++index);int fps = get_fps();if (fps > 0){LOGI("fps: %d", fps);}cv::Mat depth, irl, irr, color;parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle);if(!depth.empty()){depthViewer.show(depth);}if(!irl.empty()){ cv::imshow("LeftIR", irl); }if(!irr.empty()){ cv::imshow("RightIR", irr); }if(!color.empty()){ cv::imshow("Color", color); }int key = cv::waitKey(1);switch(key & 0xff) {case 0xff:break;case 'q':exit_main = true;break;default:LOGD("Unmapped key %d", key);}TYISPUpdateDevice(hColorIspHandle);LOGD("Re-enqueue buffer(%p, %d)", frame.userBuffer, frame.bufferSize);ASSERT_OK( TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize) );}}

A bool flag exit_main acts as the loop condition: while(!exit_main) keeps running until exit_main becomes true.

  • Fetch Frame
    This block fetches one frame of data from the camera into frame:
```cpp
int err = TYFetchFrame(hDevice, &frame, -1);
if( err == TY_STATUS_OK ) {
    LOGD("Get frame %d", ++index);

    int fps = get_fps();
    if (fps > 0){
        LOGI("fps: %d", fps);
    }
```

The core call is TYFetchFrame(hDevice, &frame, -1), whose job is to fetch one frame: given the device handle hDevice, it fills frame with one frame of camera data and returns TY_STATUS_OK on success. The third argument is a timeout; the samples pass -1 to block until a frame arrives.

When err == TY_STATUS_OK (the frame was fetched successfully), the program logs Get frame followed by the value of index, indicating which frame this is; index is incremented by one on every iteration.
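With that format string, repeated iterations produce console output along these lines (the exact prefix, if any, depends on the LOGD implementation):

```
Get frame 1
Get frame 2
Get frame 3
...
```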

  • Parse Frame
    This block parses the fetched frame:
```cpp
cv::Mat depth, irl, irr, color;
parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle);
```

First, cv::Mat images are declared: the depth map depth, the color image color, and the left and right IR images irl and irr. parseFrame() then decodes frame into the depth map, the two IR images, and the color image.

  • User Process
    With the frame parsed, we now hold the camera's depth map depth, color image color, and so on, and can process them freely. The sample implements simple display: it visualizes whichever of the depth, IR, and color images are non-empty:
```cpp
if(!depth.empty()){
    depthViewer.show(depth);
}
if(!irl.empty()){ cv::imshow("LeftIR", irl); }
if(!irr.empty()){ cv::imshow("RightIR", irr); }
if(!color.empty()){ cv::imshow("Color", color); }

int key = cv::waitKey(1);
switch(key & 0xff) {
case 0xff:
    break;
case 'q':
    exit_main = true;
    break;
default:
    LOGD("Unmapped key %d", key);
}
```

Pressing q in any OpenCV window sets exit_main = true, ending the whole fetch loop (case 0xff matches the value cv::waitKey() returns when no key is pressed, masked to a byte, so idle iterations fall through silently).

  • Return Frame Buffer
```cpp
TYISPUpdateDevice(hColorIspHandle);
LOGD("Re-enqueue buffer(%p, %d)", frame.userBuffer, frame.bufferSize);
ASSERT_OK( TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize) );
```

This updates the ISP device state and pushes the frame buffer back into the buffer queue. Only two buffers are allocated at startup, so every fetched frame must be re-enqueued or the pipeline will run out of buffers.

2. Creating Your Own Point Cloud Program

File Structure

The simplest approach is to create a new folder directly under sample, for example point3D, create a main.cpp inside it, and add the folder to CMakeLists.txt:

```cmake
set(ALL_SAMPLES
    point3D          # add your new folder here
    DumpAllFeatures
    ListDevices
    ...
```

Then open a terminal in the sample/build directory, rebuild, and run:

```bash
cmake ..
make
cd bin
sudo ./point3D
```

Creating the Point Cloud

From the walkthrough of the sample program, parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle) yields the camera's depth map and color image. Processing the depth map gives each point's position (x, y, z), processing the color image gives its color (r, g, b), and combining the two produces a colored point cloud.

We will process the cloud with the Point Cloud Library (PCL), which must be installed first:

  • Install the PCL library
```bash
sudo apt install libpcl-dev
```
  • Add PCL to CMakeLists.txt

Add the following block:

```cmake
# ========================================
# === PCL
# ========================================
find_package(PCL 1.8 REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})
```

and modify the fifth line from the end (the target_link_libraries call) to link PCL:

```cmake
target_link_libraries(${sample} sample_common ${ABSOLUTE_TARGET_LIB} ${OpenCV_LIBS} ${CLOUD_VIEWER} ${PCL_LIBRARIES})
```
  • Map depth image to 3D points
    Using the depth camera's calibration parameters, map the depth image to 3D points:

(1) First obtain the depth camera's calibration parameters. Per the official documentation, TYGetStruct() does this:

```cpp
TY_CAMERA_CALIB_INFO depth_calib;
TYGetStruct(hDevice, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA,
            &depth_calib, sizeof(depth_calib)); // fetch the depth camera's calibration data
```

(2) Then convert the depth image into 3D points:

std::vector<TY_VECT_3F> p3d; // p3d 用于存儲三維數據 TYMapDepthImageToPoint3d(&depth_calib, depth.cols, depth.rows , (uint16_t*)depth.data, &p3d[0]); // 輸入深度數據和標定數據,輸出三維數據

p3d[i].x, p3d[i].y, and p3d[i].z are the x, y, and z values of the i-th point. The buffer is row-major, matching the depth image layout, as illustrated by the sketch below.
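A small sketch of that indexing (pointIndex is a hypothetical helper written for this post, not part of the SDK):

```cpp
#include <cstddef>

// Row-major index of the 3D point that corresponds to depth pixel (row m, col n).
// p3d holds one TY_VECT_3F per depth pixel, in the same order as the image.
inline std::size_t pointIndex(int m, int n, int depthCols) {
    return static_cast<std::size_t>(m) * depthCols + n;
}
// e.g. for a 640-column depth map, pixel (m = 2, n = 10) maps to 2 * 640 + 10 = 1290
```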

  • Map original RGB image to depth coordinate RGB image
    Using the color camera's calibration parameters, register the color image to the depth image:

(1) First obtain the color camera's calibration parameters, again via TYGetStruct():

```cpp
TY_CAMERA_CALIB_INFO color_calib;
TYGetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA,
            &color_calib, sizeof(color_calib)); // fetch the color camera's calibration data
```

(2) Register the color image to the depth image. First define a helper doRgbRegister() that performs the registration:

```cpp
// doRgbRegister(): undistort the color image, then register it to the depth coordinate frame
static void doRgbRegister(const TY_CAMERA_CALIB_INFO& depth_calib,
                          const TY_CAMERA_CALIB_INFO& color_calib,
                          const cv::Mat& depth,
                          const cv::Mat& color,
                          cv::Mat& out)
{
    // do rgb undistortion
    TY_IMAGE_DATA src;
    src.width = color.cols;
    src.height = color.rows;
    src.size = color.size().area() * 3;
    src.pixelFormat = TY_PIXEL_FORMAT_RGB;
    src.buffer = color.data;

    cv::Mat undistort_color = cv::Mat(color.size(), CV_8UC3);
    TY_IMAGE_DATA dst;
    dst.width = color.cols;
    dst.height = color.rows;
    dst.size = undistort_color.size().area() * 3;
    dst.buffer = undistort_color.data;
    dst.pixelFormat = TY_PIXEL_FORMAT_RGB;
    TYUndistortImage(&color_calib, &src, NULL, &dst);

    // do register
    out.create(depth.size(), CV_8UC3);
    TYMapRGBImageToDepthCoordinate(
        &depth_calib,
        depth.cols, depth.rows, depth.ptr<uint16_t>(),
        &color_calib,
        undistort_color.cols, undistort_color.rows, undistort_color.ptr<uint8_t>(),
        out.ptr<uint8_t>());
}
```

Then call doRgbRegister() from the main function:

```cpp
cv::Mat color_data_mat; // color_data_mat is the registered color image
if (!color.empty()) {
    bool hasColorCalib;
    TYHasFeature(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA,
                 &hasColorCalib); // check whether color calibration data is available
    if (hasColorCalib) {
        // depth calib + color calib + depth map + color image in, registered color image out
        doRgbRegister(depth_calib, color_calib, depth, color, color_data_mat);
        cv::cvtColor(color_data_mat, color_data_mat, cv::COLOR_BGR2RGB); // convert BGR to RGB
    }
}
```
  • Generate a PointXYZRGB point cloud (the core code)
pcl::PointCloud<pcl::PointXYZRGB> cloud; // 生成的點云 cloud pcl::PointXYZRGB point; for (int m = 0; m < depth.rows; m++){for (int n=0; n < depth.cols; n++){point.x = p3d[(m*(depth.cols)+n)].x;point.y = p3d[(m*(depth.cols)+n)].y;point.z = p3d[(m*(depth.cols)+n)].z;point.r = color_data_mat.at<cv::Vec3b>(m, n)[0];point.g = color_data_mat.at<cv::Vec3b>(m, n)[1];point.b =color_data_mat.at<cv::Vec3b>(m, n)[2];cloud.points.push_back(point); // 構造xyzrgb類型點云}}cloud.width = (uint32_t)cloud.points.size();cloud.height = 1;
  • 點云可視化
```cpp
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer1(
    new pcl::visualization::PCLVisualizer("3D Viewer"));

pcl::PointCloud<pcl::PointXYZRGB>::Ptr basic_cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
basic_cloud_ptr = cloud.makeShared(); // convert to a shared pointer
basic_cloud_ptr->is_dense = false;    // hand-built clouds default to dense; without this,
                                      // removeNaNFromPointCloud does nothing
pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);
std::vector<int> mapping;
pcl::removeNaNFromPointCloud(*basic_cloud_ptr, *cloud_ptr, mapping); // drop invalid (NaN) points

viewer1->removeAllPointClouds(); // clear all currently displayed clouds
viewer1->addPointCloud<pcl::PointXYZRGB>(cloud_ptr, "initial");
viewer1->updatePointCloud(cloud_ptr, "initial");
viewer1->spinOnce(100);
```

點云圖實時顯示完整代碼

#include <TYApi.h> #include "TYImageProc.h" #include "../common.hpp"#include <vector>#include <opencv2/core/core.hpp> #include <opencv2/highgui/highgui.hpp>#include <pcl/point_types.h> #include <pcl/io/ply_io.h> #include <pcl/visualization/pcl_visualizer.h> #include <pcl/visualization/cloud_viewer.h> #include <pcl/io/io.h> #include <pcl/filters/filter.h> #include <pcl/common/impl/io.hpp>static void doRgbRegister(const TY_CAMERA_CALIB_INFO& depth_calib, const TY_CAMERA_CALIB_INFO& color_calib, const cv::Mat& depth, const cv::Mat& color, cv::Mat& out) {// do rgb undistortionTY_IMAGE_DATA src;src.width = color.cols;src.height = color.rows;src.size = color.size().area() * 3;src.pixelFormat = TY_PIXEL_FORMAT_RGB;src.buffer = color.data;cv::Mat undistort_color = cv::Mat(color.size(), CV_8UC3);TY_IMAGE_DATA dst;dst.width = color.cols;dst.height = color.rows;dst.size = undistort_color.size().area() * 3;dst.buffer = undistort_color.data;dst.pixelFormat = TY_PIXEL_FORMAT_RGB;ASSERT_OK(TYUndistortImage(&color_calib, &src, NULL, &dst));// do registerout.create(depth.size(), CV_8UC3);ASSERT_OK(TYMapRGBImageToDepthCoordinate(&depth_calib,depth.cols, depth.rows, depth.ptr<uint16_t>(),&color_calib,undistort_color.cols, undistort_color.rows, undistort_color.ptr<uint8_t>(),out.ptr<uint8_t>())); }void eventCallback(TY_EVENT_INFO *event_info, void *userdata) {if (event_info->eventId == TY_EVENT_DEVICE_OFFLINE) {LOGD("=== Event Callback: Device Offline!");// Note: // Please set TY_BOOL_KEEP_ALIVE_ONOFF feature to false if you need to debug with breakpoint!}else if (event_info->eventId == TY_EVENT_LICENSE_ERROR) {LOGD("=== Event Callback: License Error!");} }int main(int argc, char* argv[]) {std::string ID, IP;TY_INTERFACE_HANDLE hIface = NULL;TY_ISP_HANDLE hColorIspHandle = NULL;TY_DEV_HANDLE hDevice = NULL;int32_t color, ir, depth;color = ir = depth = 1;for(int i = 1; i < argc; i++) {if(strcmp(argv[i], "-id") == 0){ID = argv[++i];} else if(strcmp(argv[i], "-ip") == 0) {IP = argv[++i];} else if(strcmp(argv[i], "-color=off") == 0) {color = 0;} else if(strcmp(argv[i], "-depth=off") == 0) {depth = 0;} else if(strcmp(argv[i], "-ir=off") == 0) {ir = 0;} else if(strcmp(argv[i], "-h") == 0) {LOGI("Usage: SimpleView_FetchFrame [-h] [-id <ID>] [-ip <IP>]");return 0;}}LOGD("Init lib");ASSERT_OK( TYInitLib() );TY_VERSION_INFO ver;ASSERT_OK( TYLibVersion(&ver) );LOGD(" - lib version: %d.%d.%d", ver.major, ver.minor, ver.patch);std::vector<TY_DEVICE_BASE_INFO> selected;ASSERT_OK( selectDevice(TY_INTERFACE_ALL, ID, IP, 1, selected) );ASSERT(selected.size() > 0);TY_DEVICE_BASE_INFO& selectedDev = selected[0];ASSERT_OK( TYOpenInterface(selectedDev.iface.id, &hIface) );ASSERT_OK( TYOpenDevice(hIface, selectedDev.id, &hDevice) );int32_t allComps;ASSERT_OK( TYGetComponentIDs(hDevice, &allComps) );///try to enable color cameraif(allComps & TY_COMPONENT_RGB_CAM && color) {LOGD("Has RGB camera, open RGB cam");ASSERT_OK( TYEnableComponents(hDevice, TY_COMPONENT_RGB_CAM) );//create a isp handle to convert raw image(color bayer format) to rgb imageASSERT_OK(TYISPCreate(&hColorIspHandle));//Init code can be modified in common.hpp//NOTE: Should set RGB image format & size before init ISPASSERT_OK(ColorIspInitSetting(hColorIspHandle, hDevice));//You can call follow function to show color isp supported features #if 0ColorIspShowSupportedFeatures(hColorIspHandle); #endif//You can turn on auto exposure function as follow ,but frame rate may reduce .//Device may be casually stucked 1~2 seconds while software is trying to adjust device 
exposure time value #if 0ASSERT_OK(ColorIspInitAutoExposure(hColorIspHandle, hDevice)); #endif}if (allComps & TY_COMPONENT_IR_CAM_LEFT && ir) {LOGD("Has IR left camera, open IR left cam");ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_LEFT));}if (allComps & TY_COMPONENT_IR_CAM_RIGHT && ir) {LOGD("Has IR right camera, open IR right cam");ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_IR_CAM_RIGHT));}//try to enable depth mapLOGD("Configure components, open depth cam");if (allComps & TY_COMPONENT_DEPTH_CAM && depth) {int32_t image_mode;ASSERT_OK(get_default_image_mode(hDevice, TY_COMPONENT_DEPTH_CAM, image_mode));LOGD("Select Depth Image Mode: %dx%d", TYImageWidth(image_mode), TYImageHeight(image_mode));ASSERT_OK(TYSetEnum(hDevice, TY_COMPONENT_DEPTH_CAM, TY_ENUM_IMAGE_MODE, image_mode));ASSERT_OK(TYEnableComponents(hDevice, TY_COMPONENT_DEPTH_CAM));//depth map pixel format is uint16_t ,which default unit is 1 mm//the acutal depth (mm)= PixelValue * ScaleUnit }LOGD("Prepare image buffer");uint32_t frameSize;ASSERT_OK( TYGetFrameBufferSize(hDevice, &frameSize) );LOGD(" - Get size of framebuffer, %d", frameSize);LOGD(" - Allocate & enqueue buffers");char* frameBuffer[2];frameBuffer[0] = new char[frameSize];frameBuffer[1] = new char[frameSize];LOGD(" - Enqueue buffer (%p, %d)", frameBuffer[0], frameSize);ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[0], frameSize) );LOGD(" - Enqueue buffer (%p, %d)", frameBuffer[1], frameSize);ASSERT_OK( TYEnqueueBuffer(hDevice, frameBuffer[1], frameSize) );LOGD("Register event callback");ASSERT_OK(TYRegisterEventCallback(hDevice, eventCallback, NULL));bool hasTrigger;ASSERT_OK(TYHasFeature(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &hasTrigger));if (hasTrigger) {LOGD("Disable trigger mode");TY_TRIGGER_PARAM trigger;trigger.mode = TY_TRIGGER_MODE_OFF;ASSERT_OK(TYSetStruct(hDevice, TY_COMPONENT_DEVICE, TY_STRUCT_TRIGGER_PARAM, &trigger, sizeof(trigger)));}LOGD("Start capture");ASSERT_OK( TYStartCapture(hDevice) );LOGD("While loop to fetch frame");TY_FRAME_DATA frame;boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer1(new pcl::visualization::PCLVisualizer("3D Viewer"));while(!viewer1->wasStopped()){int err = TYFetchFrame(hDevice, &frame, -1);cv::Mat depth, irl, irr, color;parseFrame(frame, &depth, &irl, &irr, &color, hColorIspHandle);std::vector<TY_VECT_3F> p3d;TY_CAMERA_CALIB_INFO depth_calib; TY_CAMERA_CALIB_INFO color_calib;pcl::PointCloud<pcl::PointXYZRGB> cloud;pcl::PointXYZRGB point; p3d.resize(depth.size().area());TYGetStruct(hDevice, TY_COMPONENT_DEPTH_CAM, TY_STRUCT_CAM_CALIB_DATA, &depth_calib, sizeof(depth_calib)); // 提取深度相機的標定數據TYGetStruct(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA, &color_calib, sizeof(color_calib)); // 提取RGB相機的標定數據TYMapDepthImageToPoint3d(&depth_calib, depth.cols, depth.rows, (uint16_t*)depth.data, &p3d[0]); // 深度圖像->xyz點云cv::Mat color_data_mat;if (!color.empty()){bool hasColorCalib;TYHasFeature(hDevice, TY_COMPONENT_RGB_CAM, TY_STRUCT_CAM_CALIB_DATA, &hasColorCalib);if (hasColorCalib){doRgbRegister(depth_calib, color_calib, depth, color, color_data_mat);cv::cvtColor(color_data_mat, color_data_mat, cv::COLOR_BGR2RGB);}}for (int m = 0; m < depth.rows; m++){for (int n=0; n < depth.cols; n++){point.x = p3d[(m*(depth.cols)+n)].x;point.y = p3d[(m*(depth.cols)+n)].y;point.z = p3d[(m*(depth.cols)+n)].z;point.r = color_data_mat.at<cv::Vec3b>(m, n)[0];point.g = color_data_mat.at<cv::Vec3b>(m, n)[1];point.b =color_data_mat.at<cv::Vec3b>(m, n)[2];cloud.points.push_back(point); // 
構造xyzrgb類型點云}}cloud.width = (uint32_t)cloud.points.size();cloud.height = 1;pcl::PointCloud<pcl::PointXYZRGB>::Ptr basic_cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);basic_cloud_ptr = cloud.makeShared(); // 轉換為指針格式 basic_cloud_ptrbasic_cloud_ptr->is_dense = false; // 自己創建的點云,默認為dense,需要修改屬性,否則removenanfrompointcloud函數無效pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_ptr(new pcl::PointCloud<pcl::PointXYZRGB>);std::vector<int> mapping;pcl::removeNaNFromPointCloud(*basic_cloud_ptr, *cloud_ptr, mapping); // 移除無效點viewer1->removeAllPointClouds(); // 移除當前所有點云viewer1->addPointCloud<pcl::PointXYZRGB> (cloud_ptr, "initial"); viewer1->updatePointCloud(cloud_ptr, "initial"); viewer1->spinOnce(100);TYISPUpdateDevice(hColorIspHandle);LOGD("Re-enqueue buffer(%p, %d)", frame.userBuffer, frame.bufferSize);ASSERT_OK( TYEnqueueBuffer(hDevice, frame.userBuffer, frame.bufferSize));}ASSERT_OK( TYStopCapture(hDevice) );ASSERT_OK( TYCloseDevice(hDevice) );ASSERT_OK( TYCloseInterface(hIface) );ASSERT_OK(TYISPRelease(&hColorIspHandle));ASSERT_OK( TYDeinitLib() );LOGD("Main done!");return 0; }

3. Creating a Standalone Project

If you would rather not work inside the official SDK's file tree, you can create a standalone project folder, for example TYCamera. The original post showed the file layout as an image; a plausible reconstruction is sketched below.
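The folder names here are assumptions inferred from the CMakeLists.txt below (which references src/ and include/); adjust them to your own setup:

```
TYCamera/
├── CMakeLists.txt
├── include/           # TY SDK headers (TYApi.h, TYImageProc.h, common.hpp, ...)
├── lib/               # libtycam.so copied from the SDK (adjust the path in CMakeLists.txt)
└── src/
    ├── main.cpp       # the complete real-time point cloud code above
    ├── MatViewer.cpp  # copied from the SDK's sample common code
    └── TYThread.cpp   # copied from the SDK's sample common code
```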

Apart from CMakeLists.txt and main.cpp, every other file can be copied directly from the official SDK; main.cpp is the complete real-time point cloud display code from the previous section.

Modify CMakeLists.txt as follows (it builds and runs, but I am not very familiar with CMake, so some of this may not be best practice):

```cmake
cmake_minimum_required(VERSION 2.8)
project(rgbd_camera)

if (NOT TARGET tycam) # only build samples
    option (OpenCV_STATIC OFF)
    set(INCLUDE_PATH include)
    include_directories(${INCLUDE_PATH})
    set(ABSOLUTE_TARGET_LIB tycam)
    add_library(${ABSOLUTE_TARGET_LIB} SHARED IMPORTED)
    if(ARCH)
        set_property(TARGET ${ABSOLUTE_TARGET_LIB} PROPERTY IMPORTED_LOCATION
                     ${CMAKE_CURRENT_SOURCE_DIR}/../lib/linux/lib_${ARCH}/libtycam.so)
    else()
        set(ABSOLUTE_TARGET_LIB -ltycam)
    endif()
endif()

# ========================================
# === common, build static lib to speed up
# ========================================
set(COMMON_SOURCES src/MatViewer.cpp src/TYThread.cpp)
add_library(sample_common STATIC ${COMMON_SOURCES})
if(UNIX)
    target_link_libraries(sample_common pthread)
endif()

# ========================================
# === OpenCV
# ========================================
set(OpenCV_DIR "/home/olefine_casia/opencv-4.5.1/build/") # adjust to your local OpenCV build
find_package(OpenCV REQUIRED)
if (NOT OpenCV_FOUND)
    message(FATAL_ERROR "OpenCV library not found")
else()
    include_directories(${OpenCV_INCLUDE_DIRS})
    include_directories(${OpenCV2_INCLUDE_DIRS})
    # if(CMAKE_BUILD_TYPE STREQUAL Debug)
    #     link_directories(${OpenCV_LIB_DIR_DBG})
    # else()
    #     link_directories(${OpenCV_LIB_DIR_OPT})
    # endif()
    link_directories(${OpenCV_LIB_DIR})
endif()

# ========================================
# === PCL
# ========================================
find_package(PCL 1.8 REQUIRED)
include_directories(${PCL_INCLUDE_DIRS})
link_directories(${PCL_LIBRARY_DIRS})
add_definitions(${PCL_DEFINITIONS})

aux_source_directory(./src/ DIR_SRCS)
add_executable(rgbd_camera ${DIR_SRCS})
include_directories(${CMAKE_CURRENT_SOURCE_DIR}/include)
target_link_libraries(rgbd_camera
    ${ABSOLUTE_TARGET_LIB}
    ${OpenCV_LIBS}
    ${PCL_LIBRARIES}
)
```
