
Tracking Keypoints with Optical Flow --- 30

Published: 2023/12/18

原創(chuàng)博客:轉(zhuǎn)載請標(biāo)明出處:http://www.cnblogs.com/zxouxuewei/


Keypoints are regions where the image brightness changes strongly in multiple directions.

OpenCV version: 2.4.

Optical flow function: calcOpticalFlowPyrLK(). The keypoint detector is goodFeaturesToTrack(); the two are used together.

The corresponding launch file is lk_tracker.launch.

First, make sure your Kinect driver or UVC camera driver starts correctly. (If you are using a Kinect, run the OpenNI driver:)

roslaunch openni_launch openni.launch

If you have not installed the Kinect depth-camera driver, see my earlier posts.

Then run the following launch file:

roslaunch rbx1_vision lk_tracker.launch

When the video window appears, draw a rectangle with the mouse around some object in the image. The rectangle marks the selected region, and you will see small green dots appear inside it; these are the keypoints that the goodFeaturesToTrack() detector found in that region. Then try moving the selected object, and you will see calcOpticalFlowPyrLK() track the keypoints from frame to frame.

Here is my running result:

After moving:

Now let's look at the code, mainly the lk_tracker.py script:

#!/usr/bin/env python

""" lk_tracker.py - Version 1.1 2013-12-20

    Based on the OpenCV lk_track.py demo code

    Created for the Pi Robot Project: http://www.pirobot.org
    Copyright (c) 2011 Patrick Goebel.  All rights reserved.

    This program is free software; you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation; either version 2 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU General Public License for more details at:

    http://www.gnu.org/licenses/gpl.html
"""

import rospy
import cv2
import cv2.cv as cv
import numpy as np
from rbx1_vision.good_features import GoodFeatures

class LKTracker(GoodFeatures):
    def __init__(self, node_name):
        super(LKTracker, self).__init__(node_name)

        self.show_text = rospy.get_param("~show_text", True)
        self.feature_size = rospy.get_param("~feature_size", 1)

        # LK parameters
        self.lk_winSize = rospy.get_param("~lk_winSize", (10, 10))
        self.lk_maxLevel = rospy.get_param("~lk_maxLevel", 2)
        self.lk_criteria = rospy.get_param("~lk_criteria", (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 20, 0.01))

        self.lk_params = dict(winSize=self.lk_winSize,
                              maxLevel=self.lk_maxLevel,
                              criteria=self.lk_criteria)

        self.detect_interval = 1
        self.keypoints = None

        self.detect_box = None
        self.track_box = None
        self.mask = None
        self.grey = None
        self.prev_grey = None

    def process_image(self, cv_image):
        try:
            # If we don't yet have a detection box (drawn by the user
            # with the mouse), keep waiting
            if self.detect_box is None:
                return cv_image

            # Create a greyscale version of the image
            self.grey = cv2.cvtColor(cv_image, cv2.COLOR_BGR2GRAY)

            # Equalize the grey histogram to minimize lighting effects
            self.grey = cv2.equalizeHist(self.grey)

            # If we haven't yet started tracking, set the track box to the
            # detect box and extract the keypoints within it
            if self.track_box is None or not self.is_rect_nonzero(self.track_box):
                self.track_box = self.detect_box
                self.keypoints = self.get_keypoints(self.grey, self.track_box)
            else:
                if self.prev_grey is None:
                    self.prev_grey = self.grey

                # Now that we have keypoints, track them to the next frame
                # using optical flow
                self.track_box = self.track_keypoints(self.grey, self.prev_grey)

            # Process any special keyboard commands for this module
            if self.keystroke != -1:
                try:
                    cc = chr(self.keystroke & 255).lower()
                    if cc == 'c':
                        # Clear the current keypoints
                        self.keypoints = None
                        self.track_box = None
                        self.detect_box = None
                except:
                    pass

            self.prev_grey = self.grey
        except:
            pass

        return cv_image

    def track_keypoints(self, grey, prev_grey):
        # We are tracking points between the previous frame and the
        # current frame
        img0, img1 = prev_grey, grey

        # Reshape the current keypoints into a numpy array required
        # by calcOpticalFlowPyrLK()
        p0 = np.float32([p for p in self.keypoints]).reshape(-1, 1, 2)

        # Calculate the optical flow from the previous frame to the current frame
        p1, st, err = cv2.calcOpticalFlowPyrLK(img0, img1, p0, None, **self.lk_params)

        # Do the reverse calculation: from the current frame to the previous frame
        try:
            p0r, st, err = cv2.calcOpticalFlowPyrLK(img1, img0, p1, None, **self.lk_params)

            # Compute the distance between corresponding points in the two flows
            d = abs(p0 - p0r).reshape(-1, 2).max(-1)

            # If the distance between pairs of points is < 1 pixel, set
            # a value in the "good" array to True, otherwise False
            good = d < 1

            # Initialize a list to hold new keypoints
            new_keypoints = list()

            # Cycle through all current and new keypoints and only keep
            # those that satisfy the "good" condition above
            for (x, y), good_flag in zip(p1.reshape(-1, 2), good):
                if not good_flag:
                    continue
                new_keypoints.append((x, y))

                # Draw the keypoint on the image
                cv2.circle(self.marker_image, (x, y), self.feature_size, (0, 255, 0, 0), cv.CV_FILLED, 8, 0)

            # Set the global keypoint list to the new list
            self.keypoints = new_keypoints

            # Convert the keypoints list to a numpy array
            keypoints_array = np.float32([p for p in self.keypoints]).reshape(-1, 1, 2)

            # If we have enough points, find the best fit ellipse around them
            if len(self.keypoints) > 6:
                track_box = cv2.fitEllipse(keypoints_array)
            else:
                # Otherwise, find the best fitting rectangle
                track_box = cv2.boundingRect(keypoints_array)
        except:
            track_box = None

        return track_box

if __name__ == '__main__':
    try:
        node_name = "lk_tracker"
        LKTracker(node_name)
        rospy.spin()
    except KeyboardInterrupt:
        print "Shutting down LK Tracking node."
        cv.DestroyAllWindows()

Reposted from: https://www.cnblogs.com/zxouxuewei/p/5409961.html
