Traditional Handcrafted Features -- OpenCV
1. Color features:
Put simply, a color histogram counts the colors of all pixels in an image. Applicable color spaces: RGB, HSV, etc.
Procedure: quantize the color space into cells (bins), represent each bin by its center, and count the number of pixels that fall into each bin.
Quantized color histogram (HSV space):
Drawbacks: the histogram is sparse, and hard quantization boundaries can split similar colors into different bins.
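A minimal sketch of a quantized HSV histogram using OpenCV's calcHist; the file path (reusing the sample path from the SIFT example below) and the 8x4x4 bin counts are illustrative, not from the original:

import cv2

def hsv_histogram(path='./data/home.jpg'):  # path is a placeholder
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # 8 H-bins x 4 S-bins x 4 V-bins; note H ranges over 0-180 in OpenCV
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 4, 4],
                        [0, 180, 0, 256, 0, 256])
    # normalize so images of different sizes are comparable
    hist = cv2.normalize(hist, hist).flatten()
    return hist  # 8*4*4 = 128-dimensional color feature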
Clustered color histogram:
Applicable color spaces: Lab and similar.
Procedure: run a clustering algorithm over the color vectors of all pixels;
each cell (bin) is represented by a cluster center.
This solves the sparsity problem, since the bins adapt to the colors actually present in the image.
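A minimal sketch of a clustered color histogram with cv2.kmeans over Lab pixels; k=16 and the path are illustrative choices, not from the original:

import cv2
import numpy as np

def clustered_histogram(path='./data/home.jpg', k=16):  # k is a placeholder
    img = cv2.imread(path)
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    pixels = lab.reshape(-1, 3).astype(np.float32)
    # cluster all pixel color vectors; each cluster center becomes one bin
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    # histogram = number of pixels assigned to each cluster center
    hist = np.bincount(labels.flatten(), minlength=k).astype(np.float32)
    return hist / hist.sum(), centers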
2. Geometric features
Edges: regions where pixel values change rapidly; they carry rich semantic information.
Edge extraction:
First handle noise with Gaussian smoothing, then take the first derivative and look for its extrema.
First derivative of the Gaussian filter: by the derivative property of convolution, d/dx (G * f) = (dG/dx) * f, so smoothing and differentiation combine into a single convolution with the Gaussian's derivative.
Direction of fastest change: the gradient direction θ = arctan(g_y / g_x), with edge strength |∇f| = sqrt(g_x² + g_y²).
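A minimal sketch of this pipeline (Gaussian smoothing, then first derivatives via Sobel); the path and kernel sizes are illustrative:

import cv2
import numpy as np

def edge_gradient(path='./data/home.jpg'):  # path is a placeholder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # 1) suppress noise with a Gaussian filter
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)
    # 2) first derivatives (Sobel approximation)
    gx = cv2.Sobel(blurred, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(blurred, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.sqrt(gx ** 2 + gy ** 2)  # edge strength
    direction = np.arctan2(gy, gx)          # direction of fastest change
    return magnitude, direction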
3. Keypoint-based feature descriptors
Under different viewing conditions, an object's size, shape, and brightness all vary, yet we can still recognize it as the same object; keypoint descriptors aim for this kind of invariance.
Harris corner:
A corner is a point where moving a small observation window in any direction causes a large change in pixel values. The corner response is R = det(M) - k·(trace M)², where M is the local second-moment matrix and k ≈ 0.04 (the last argument to cv2.cornerHarris below).
Code:
def harris_corner():
    import numpy as np
    import cv2
    filename = './data/chessboard.png'
    img = cv2.imread(filename)
    img = cv2.resize(img, (200, 200))
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = np.float32(gray)
    # blockSize=2, Sobel ksize=3, Harris k=0.04
    dst = cv2.cornerHarris(gray, 2, 3, 0.04)
    # result is dilated for marking the corners, not important
    dst = cv2.dilate(dst, None)
    # threshold at 1% of the max response; the optimal value varies per image
    img[dst > 0.01 * dst.max()] = [0, 0, 255]
    cv2.imshow('dst', img)
    if cv2.waitKey(0) & 0xff == 27:  # press Esc to close
        cv2.destroyAllWindows()

Result: (output image omitted)
(1) SIFT features: features invariant across scale space. Each keypoint neighborhood is split into a 4×4 grid with an 8-bin orientation histogram per cell, giving a 4×4×8 = 128-dimensional feature vector.
Properties: good invariance; even small objects can produce a large number of SIFT features.
Code:

def sift():
    import numpy as np
    import cv2
    img = cv2.imread('./data/home.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # create the SIFT detector and find keypoints on the grayscale image
    sift = cv2.xfeatures2d.SIFT_create()
    kp = sift.detect(gray, None)
    img = cv2.drawKeypoints(gray, kp, img)
    cv2.imshow("SIFT", img)
    cv2.imwrite('sift_keypoints.jpg', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

Result: (keypoint image omitted)
(2) LBP (Local Binary Patterns): compare each pixel with its surrounding neighbors; the comparison bits form one number per pixel, and the histogram of these numbers is the feature (a sketch follows below).
LBP features have notable advantages such as gray-scale invariance and rotation invariance.
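A minimal 8-neighbor LBP sketch in NumPy; it shows the bit-comparison idea only, omitting the cell-based aggregation and the rotation-invariant variant used by library implementations:

import numpy as np

def lbp_histogram(gray):
    # gray: 2-D uint8 grayscale image
    h, w = gray.shape
    center = gray[1:h-1, 1:w-1]
    code = np.zeros_like(center, dtype=np.uint8)
    # 8 neighbors, clockwise from top-left; each comparison yields one bit
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbor >= center).astype(np.uint8) << bit
    # the histogram of the 256 possible patterns is the texture descriptor
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()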
(3) SURF: to guarantee rotation invariance, SURF computes Haar wavelet responses over each feature point's neighborhood.
Code:
def surf():
    import numpy as np
    import cv2
    img = cv2.imread('./data/butterfly.jpg', 0)
    surf = cv2.xfeatures2d.SURF_create(400)
    # kp, des = surf.detectAndCompute(img, None)
    # raise the Hessian threshold so only the strongest keypoints remain
    surf.setHessianThreshold(50000)
    kp, des = surf.detectAndCompute(img, None)
    img2 = cv2.drawKeypoints(img, kp, None, (255, 0, 0), 4)
    cv2.imshow('surf', img2)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

(4) ORB features: keypoint detection based on FAST corners (ORB = Oriented FAST and Rotated BRIEF).
def orb():
    import numpy as np
    import cv2 as cv
    import matplotlib.pyplot as plt
    img1 = cv.imread('./data/box.png', 0)           # queryImage
    img2 = cv.imread('./data/box_in_scene.png', 0)  # trainImage
    # initiate ORB detector
    orb = cv.ORB_create()
    # find the keypoints and descriptors with ORB
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    # create BFMatcher object; Hamming distance suits binary descriptors
    bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)
    # match descriptors and sort them by distance
    matches = bf.match(des1, des2)
    matches = sorted(matches, key=lambda x: x.distance)
    # draw the first 20 matches
    img3 = cv.drawMatches(img1, kp1, img2, kp2, matches[:20], None, flags=2)
    plt.imshow(img3), plt.show()

(5) Gabor filter: a linear filter used for edge extraction; a sinusoid multiplied by a Gaussian gives the Gabor kernel (sketch below).
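A minimal sketch of a small Gabor filter bank with cv2.getGaborKernel; the kernel parameters and path are illustrative, not from the original:

import cv2
import numpy as np

def gabor_edges(path='./data/home.jpg'):  # path is a placeholder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    responses = []
    # 4 orientations; args: ksize, sigma, theta, lambd, gamma, psi
    for theta in np.arange(0, np.pi, np.pi / 4):
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
    # max response over orientations highlights edges in any direction
    return np.max(np.stack(responses), axis=0)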
SIFT-based stitching: Stitcher.py
import numpy as np
import cv2

class Stitcher:
    # stitching entry point
    def stitch(self, images, ratio=0.75, reprojThresh=4.0, showMatches=False):
        # unpack the input images
        (imageB, imageA) = images
        # detect SIFT keypoints in A and B and compute their descriptors
        (kpsA, featuresA) = self.detectAndDescribe(imageA)
        (kpsB, featuresB) = self.detectAndDescribe(imageB)
        # match all feature points between the two images
        M = self.matchKeypoints(kpsA, kpsB, featuresA, featuresB, ratio, reprojThresh)
        # if matching failed, there are not enough matched keypoints; abort
        if M is None:
            return None
        # otherwise unpack the result; H is the 3x3 perspective transform matrix
        (matches, H, status) = M
        # warp image A into the new viewpoint; result is the warped image
        result = cv2.warpPerspective(imageA, H,
            (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
        # paste image B into the left part of the result
        result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB
        # optionally visualize the keypoint matches
        if showMatches:
            vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches, status)
            return (result, vis)
        return result

    def detectAndDescribe(self, image):
        # convert the color image to grayscale
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # create the SIFT detector
        descriptor = cv2.xfeatures2d.SIFT_create()
        # detect SIFT keypoints and compute descriptors
        (kps, features) = descriptor.detectAndCompute(image, None)
        # convert the keypoints to a NumPy array of (x, y) coordinates
        kps = np.float32([kp.pt for kp in kps])
        # return the keypoints and their descriptors
        return (kps, features)

    def matchKeypoints(self, kpsA, kpsB, featuresA, featuresB, ratio, reprojThresh):
        # brute-force matcher
        matcher = cv2.DescriptorMatcher_create("BruteForce")
        # kNN matching (k = 2) between descriptors of A and B
        rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
        matches = []
        for m in rawMatches:
            # keep a match when the nearest distance is well below the
            # second-nearest distance (Lowe's ratio test)
            if len(m) == 2 and m[0].distance < m[1].distance * ratio:
                # store the indices of the points in featuresA and featuresB
                matches.append((m[0].trainIdx, m[0].queryIdx))
        # estimating the perspective transform needs more than 4 matches
        if len(matches) > 4:
            # gather the coordinates of the matched points
            ptsA = np.float32([kpsA[i] for (_, i) in matches])
            ptsB = np.float32([kpsB[i] for (i, _) in matches])
            # estimate the perspective transform matrix with RANSAC
            (H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, reprojThresh)
            return (matches, H, status)
        # too few matches: return None
        return None

    def drawMatches(self, imageA, imageB, kpsA, kpsB, matches, status):
        # initialize the visualization: A and B side by side
        (hA, wA) = imageA.shape[:2]
        (hB, wB) = imageB.shape[:2]
        vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
        vis[0:hA, 0:wA] = imageA
        vis[0:hB, wA:] = imageB
        # walk the matches and draw a line for each successful pair
        for ((trainIdx, queryIdx), s) in zip(matches, status):
            if s == 1:
                ptA = (int(kpsA[queryIdx][0]), int(kpsA[queryIdx][1]))
                ptB = (int(kpsB[trainIdx][0]) + wA, int(kpsB[trainIdx][1]))
                cv2.line(vis, ptA, ptB, (0, 255, 0), 1)
        # return the visualization
        return vis


def image_stich():
    from opencv.Stitcher import Stitcher
    import cv2
    # read the two images to stitch
    imageA = cv2.imread("./data/left_01.png")
    imageB = cv2.imread("./data/right_01.png")
    # stitch them into a panorama
    stitcher = Stitcher()
    (result, vis) = stitcher.stitch([imageA, imageB], showMatches=True)
    # show all images
    cv2.imshow("Image A", imageA)
    cv2.imshow("Image B", imageB)
    cv2.imshow("Keypoint Matches", vis)
    cv2.imshow("Result", result)
    cv2.waitKey(0)
    cv2.destroyAllWindows()