[Tracking] KCF + KalmanFilter Object Tracking
A camera surveillance system based on KCF, MobileNet V2, and a Kalman filter
Introduction
This started as a course assignment. Tracking has lagged years behind detection. A common assumption is that once detection works well and runs fast enough, you get tracking for free. That is not the case: the fastest detectors today are, in my view, the lightweight networks that can run on ARM devices such as phones, but they trade away accuracy, and detection alone still cannot treat the objects found across multiple frames as one identity and draw its trajectory. The fundamental difference between tracking and detection is that a tracker can localize very quickly, because it essentially only needs to detect once and then follow the target: the tracking algorithm searches only the neighborhood of the target's last known position to decide whether the target is there, instead of scanning the whole image the way a detector does. This post combines the lightweight MobileNet network with the MIL and KCF tracking algorithms, and the combination achieves a reasonable tracking result.
For a competition I later adapted it into a RoboMaster tracking program.
Challenges & Solutions
Tracking multiple people
To track several targets at once, my idea is to write a Person class and instantiate a separate person object per target; each person object handles only its own tracking and does not interfere with the others, and an object is destroyed after it has been outside the video frame for some time (configurable as a parameter). Initialization uses the MobileNet detections: when there is no target in the stream, only MobileNet detection runs. Detection must also be re-run periodically, and each detected person is compared by distance against the persons already instantiated, which amounts to a matching step: existing targets are kept, unmatched detections are instantiated as new persons and then tracked (a sketch of this matching step follows).
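A minimal sketch of that matching step might look like this (match_detections and MATCH_DIST are illustrative names of my own, not part of the code below; Person is the class shown later):

import numpy as np

MATCH_DIST = 60  # px; assumed distance threshold for "same person"

def match_detections(persons, detections, frame):
    # Greedy matching: a detection whose center lies near an existing
    # person's tracked center is assumed to be that person; the rest
    # spawn new Person trackers.
    for bbox in detections:  # bbox: (x, y, w, h)
        center = np.array([bbox[0] + bbox[2] / 2, bbox[1] + bbox[3] / 2])
        dists = [np.linalg.norm(center - np.array(p._center)) for p in persons]
        if not dists or min(dists) > MATCH_DIST:
            persons.append(Person(frame, bbox))  # unmatched: new target
    return persons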
Tracking people entering from any direction
Since people may enter the frame from any direction, the demands on the classifier are high, so we need a large training set of real-scene images of people entering the frame in all kinds of poses. With only limited dorm-room space, I could not capture a training set of people entering and leaving from every angle and direction of the camera, so this problem remains open; I believe it can be addressed by enriching the training set (which is stating the obvious).
Tracking under occlusion
My plan for occlusion splits it into short-term and long-term cases. For short-term occlusion we can count frames since the target disappeared and set a threshold: within the threshold we either keep drawing the last box, or predict from the previous velocity. Pure prediction goes wrong, though, because it extrapolates the last frame's velocity, so the predicted box keeps translating in that direction; the velocity should therefore decay during prediction to avoid drifting forever (see the sketch after this paragraph). Another option is the Kalman filter, which I intend to use and whose paper I am currently studying. The long-term case is the Long-Term-Tracking problem; existing approaches include part-based recognition, which detects individual parts of a person and fuses the results, among other methods I am still reading about.
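As a sketch of the decaying-velocity idea (all names here are illustrative; the code later in this post uses a Kalman filter instead):

DECAY = 0.8    # assumed per-frame velocity decay factor
MAX_LOST = 20  # frames to keep predicting before dropping the target

def predict_while_occluded(bbox, velocity, lost_frames):
    # Shift the last known box by a velocity that shrinks every frame,
    # so the predicted box does not drift forever in one direction.
    if lost_frames > MAX_LOST:
        return None  # long-term occlusion: give the target up
    vx = velocity[0] * DECAY ** lost_frames
    vy = velocity[1] * DECAY ** lost_frames
    x, y, w, h = bbox
    return (x + vx, y + vy, w, h)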
Tracking under changing illumination
Illumination changes can be left entirely to the neural network: the features it extracts support fairly accurate classification across scales and lighting conditions, so detection keeps working well in both dim and strong light.
KCF & KalmanFilter
KCF
KCF is short for Kernelized Correlation Filter. It uses circular shifts for dense sampling, trains its classifier quickly via the FFT, and combines multi-channel HOG features. Roughly: circularly shifting the image patch with a circulant matrix yields many samples; around the current position in frame t these samples train a classifier that outputs a probability response for whether the box contains the target. In the next frame, the region from the previous frame is circularly shifted into samples again, the previously trained classifier scores them, the position with the maximum response becomes the predicted location, and then we retrain and predict again. I will write a separate post deriving this algorithm.
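To make the circulant-shift trick concrete, here is a minimal single-channel, linear-kernel sketch of the train/detect steps (the real KCF adds a Gaussian kernel, multi-channel HOG features, a cosine window, and model interpolation, all omitted here):

import numpy as np

def kcf_train(x, y, lam=1e-4):
    # x: target patch; y: desired Gaussian-shaped response of the same size.
    # All circular shifts of x form a circulant matrix, which the DFT
    # diagonalizes, so ridge regression over every shifted sample collapses
    # to an element-wise division in the Fourier domain.
    X, Y = np.fft.fft2(x), np.fft.fft2(y)
    kxx = np.conj(X) * X          # auto-correlation (linear kernel)
    return Y / (kxx + lam)        # dual coefficients alpha_f

def kcf_detect(alpha_f, x, z):
    # Evaluate the classifier on every circular shift of the new patch z
    # at once; the argmax of the response map is the predicted shift.
    kxz = np.conj(np.fft.fft2(x)) * np.fft.fft2(z)
    response = np.real(np.fft.ifft2(alpha_f * kxz))
    return np.unravel_index(response.argmax(), response.shape)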
Kalman Filter
State equation:

$$x_k = A_k x_{k-1} + B_k u_k + w_k$$

Measurement equation:

$$z_k = H_k x_k + v_k$$

Here $x_k$ is the state vector, $z_k$ the measurement vector, $A_k$ the state-transition matrix, $u_k$ the control vector, $B_k$ the control matrix, $w_k$ the process (system) noise, $H_k$ the measurement matrix, and $v_k$ the measurement noise. Both $w_k$ and $v_k$ are Gaussian, i.e.

$$w_k \sim \mathcal{N}(0, Q_k), \qquad v_k \sim \mathcal{N}(0, R_k)$$
The derivation used in practice proceeds as follows.
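It arrives at the standard predict/correct recursion. Predict:

$$\hat{x}_k^- = A_k \hat{x}_{k-1} + B_k u_k, \qquad P_k^- = A_k P_{k-1} A_k^T + Q_k$$

Correct:

$$K_k = P_k^- H_k^T \left(H_k P_k^- H_k^T + R_k\right)^{-1}$$

$$\hat{x}_k = \hat{x}_k^- + K_k \left(z_k - H_k \hat{x}_k^-\right), \qquad P_k = (I - K_k H_k)\, P_k^-$$

where $P_k$ is the state covariance and $K_k$ the Kalman gain. These two steps map directly onto the kalman.predict() and kalman.correct() calls in the code below.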
Usage
To use KCF I wrapped it in a class, which makes multi-target tracking possible (otherwise you are limited to a single target), and I added a Kalman filter to predict and correct the measurements.
class Person:
    def __init__(self, bg, bbox, delta_time=0.2, acc=2):
        self._zs = 0
        self._bbox = bbox
        self._tracker = cv.TrackerKCF_create()
        self._center = (int(bbox[0] + bbox[2] / 2), int(bbox[1] + bbox[3] / 2))
        self._mask = np.zeros(bg.shape, dtype=np.uint8)
        self._shape = bg.shape
        self._no_time = 0
        self._tracker.init(bg, bbox)
        self._frame = bg
        self._predicted = None
        # State space is 4D (x, y, vx, vy); measurement space is 2D (x, y).
        self.kalman = cv.KalmanFilter(4, 2, 0)
        self.kalman.transitionMatrix = np.array([[1, 0, delta_time, 0],
                                                 [0, 1, 0, delta_time],
                                                 [0, 0, 1, 0],
                                                 [0, 0, 0, 1]], dtype=np.float32)
        self.kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                                  [0, 1, 0, 0]], dtype=np.float32)
        self.kalman.statePre = np.array([[self._center[0]], [self._center[1]], [0], [0]], dtype=np.float32)
        self.kalman.statePost = np.array([[self._center[0]], [self._center[1]], [0], [0]], dtype=np.float32)
        self.kalman.processNoiseCov = acc * np.array(
            [[0.25 * delta_time ** 4, 0, 0.5 * delta_time ** 3, 0],
             [0, 0.25 * delta_time ** 4, 0, 0.5 * delta_time ** 3],
             [0.5 * delta_time ** 3, 0, delta_time ** 2, 0],
             [0, 0.5 * delta_time ** 3, 0, delta_time ** 2]], dtype=np.float32)

    def update(self, new_bbox, center):
        self._bbox = new_bbox
        self._center = center

    def precess(self, src):
        self._zs = self._zs + 1
        h, w = self._shape[:2]
        frame = copy.copy(src)
        padding = 5  # margin (px) around the frame border
        ret, bbox = self._tracker.update(frame)  # bbox: x, y, w, h
        p1, p2 = (int(bbox[0]), int(bbox[1])), (int(bbox[0]) + int(bbox[2]), int(bbox[1]) + int(bbox[3]))
        center = (int((p1[0] + p2[0]) / 2), int((p1[1] + p2[1]) / 2))
        if self._no_time == 20:  # lost for 20 frames: give the target up
            self._no_time = 0
            self._mask = np.zeros(self._shape, dtype=np.uint8)
            self._frame = src
            return (False, src)
        if ret and p1[0] >= padding and p1[1] <= (h - padding):
            # Full border check would be:
            # int(bbox[0]) >= padding and int(bbox[0] + bbox[2]) <= (w - padding)
            # and int(bbox[1]) >= padding and int(bbox[1] + bbox[3]) <= (h - padding)
            self._no_time = 0
            s = np.array([[np.float32(center[0])], [np.float32(center[1])]])
            self.kalman.correct(s)  # feed the KCF measurement to the filter
            center = self.kalman.predict().astype(int)
            center = (center[0, 0], center[1, 0])
            cv.line(self._mask, self._center, center, (255, 255, 0), 2)  # extend the trajectory
            mmask = cv.cvtColor(self._mask.astype(np.uint8), cv.COLOR_BGR2GRAY)
            mmask = cv.bitwise_not(mmask)
            self._frame = cv.add(frame, self._mask, mask=mmask)
            self.update(bbox, center)
            cv.rectangle(self._frame, p1, p2, (255, 0, 0), 2, 1)  # draw the tracked box
            cv.putText(self._frame, "recognized", p2, cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
            return (True, self._frame)
        else:
            # Tracker failed: fall back to the MobileNet detector
            # (recg_car is defined in the full listing below).
            ret, bbox = recg_car(frame)
            if ret:
                p1, p2 = (int(bbox[0]), int(bbox[1])), (int(bbox[0]) + int(bbox[2]), int(bbox[1]) + int(bbox[3]))
                center = (int((p1[0] + p2[0]) / 2), int((p1[1] + p2[1]) / 2))
                s = np.array([[np.float32(center[0])], [np.float32(center[1])]])
                self.kalman.correct(s)
                center = self.kalman.predict().astype(int)
                center = (center[0, 0], center[1, 0])
                cv.line(self._mask, self._center, center, (255, 255, 0), 2)
                mmask = cv.cvtColor(self._mask.astype(np.uint8), cv.COLOR_BGR2GRAY)
                mmask = cv.bitwise_not(mmask)
                self._frame = cv.add(frame, self._mask, mask=mmask)
                self.update(bbox, center)
                cv.rectangle(self._frame, p1, p2, (255, 0, 0), 2, 1)  # draw the detected box
                cv.putText(self._frame, "recognized", p2, cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
                return (True, self._frame)
            else:
                self._no_time = self._no_time + 1  # neither tracker nor detector found it
                mmask = cv.cvtColor(self._mask.astype(np.uint8), cv.COLOR_BGR2GRAY)
                mmask = cv.bitwise_not(mmask)
                self._frame = cv.add(frame, self._mask, mask=mmask)
                return (True, self._frame)
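A note on the processNoiseCov assigned above: it is the standard piecewise-white-acceleration noise model. Assuming each axis of the constant-velocity state $(x, y, v_x, v_y)$ is perturbed by a random acceleration with variance $\sigma_a^2$ (the acc parameter), the noise enters each axis through $G = (\Delta t^2/2,\; \Delta t)^T$, giving per axis

$$Q = \sigma_a^2 G G^T = \sigma_a^2 \begin{pmatrix} \Delta t^4/4 & \Delta t^3/2 \\ \Delta t^3/2 & \Delta t^2 \end{pmatrix},$$

which, interleaved over the four state components, is exactly the matrix in the code with $\sigma_a^2 = $ acc.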
Training and prediction with the MobileNet V2 model
I will not walk through the training process again; an earlier post covers how to do it. Here is the code (the complete version).
import cv2 as cv
import sys
import numpy as np
import os
import copy
import tensorflow as tf
sys.path.append("..")
from utils import label_map_util
from utils import visualization_utils as vis_util
DEBUG = False # not in debug mode
THRE_VAL = 0.4 # confidence threshold; a detection scoring above it gets a box drawn in the image
# ['BOOSTING', 'MIL', 'KCF', 'TLD', 'MEDIANFLOW', 'GOTURN']
#tracker = cv.TrackerKCF_create()
#tracker = cv.TrackerMIL_create()
PATH_TO_CKPT ='/home/xueaoru/trace/car/frozen_inference_graph.pb' # frozen network graph
PATH_TO_LABELS = '/home/xueaoru/trace/car/label_map.pbtxt' # label map file
NUM_CLASSES = 2 # number of classes
label_map = label_map_util.load_labelmap(PATH_TO_LABELS) # load the label map (parses the text file into a structured object)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
# The line above converts the label map entries into dicts, one output name per id
category_index = label_map_util.create_category_index(categories) # index the categories by id (the key)
detection_graph = tf.Graph() # graph that will hold the detection model
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:  # load the network model
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')
    sess = tf.Session(graph=detection_graph)  # open the session
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
cap = cv.VideoCapture("/home/xueaoru/下載/red1.mp4")
#cap = cv.VideoCapture(1)
person_exist = False
class Person:
    def __init__(self, bg, bbox, delta_time=0.2, acc=2):
        self._zs = 0
        self._bbox = bbox
        self._tracker = cv.TrackerKCF_create()
        self._center = (int(bbox[0] + bbox[2] / 2), int(bbox[1] + bbox[3] / 2))
        self._mask = np.zeros(bg.shape, dtype=np.uint8)
        self._shape = bg.shape
        self._no_time = 0
        self._tracker.init(bg, bbox)
        self._frame = bg
        self._predicted = None
        # State space is 4D (x, y, vx, vy); measurement space is 2D (x, y).
        self.kalman = cv.KalmanFilter(4, 2, 0)
        self.kalman.transitionMatrix = np.array([[1, 0, delta_time, 0],
                                                 [0, 1, 0, delta_time],
                                                 [0, 0, 1, 0],
                                                 [0, 0, 0, 1]], dtype=np.float32)
        self.kalman.measurementMatrix = np.array([[1, 0, 0, 0],
                                                  [0, 1, 0, 0]], dtype=np.float32)
        self.kalman.statePre = np.array([[self._center[0]], [self._center[1]], [0], [0]], dtype=np.float32)
        self.kalman.statePost = np.array([[self._center[0]], [self._center[1]], [0], [0]], dtype=np.float32)
        self.kalman.processNoiseCov = acc * np.array(
            [[0.25 * delta_time ** 4, 0, 0.5 * delta_time ** 3, 0],
             [0, 0.25 * delta_time ** 4, 0, 0.5 * delta_time ** 3],
             [0.5 * delta_time ** 3, 0, delta_time ** 2, 0],
             [0, 0.5 * delta_time ** 3, 0, delta_time ** 2]], dtype=np.float32)

    def update(self, new_bbox, center):
        self._bbox = new_bbox
        self._center = center

    def precess(self, src):
        self._zs = self._zs + 1
        h, w = self._shape[:2]
        frame = copy.copy(src)
        padding = 5  # margin (px) around the frame border
        ret, bbox = self._tracker.update(frame)  # bbox: x, y, w, h
        p1, p2 = (int(bbox[0]), int(bbox[1])), (int(bbox[0]) + int(bbox[2]), int(bbox[1]) + int(bbox[3]))
        center = (int((p1[0] + p2[0]) / 2), int((p1[1] + p2[1]) / 2))
        if self._no_time == 20:  # lost for 20 frames: give the target up
            self._no_time = 0
            self._mask = np.zeros(self._shape, dtype=np.uint8)
            self._frame = src
            return (False, src)
        if ret and p1[0] >= padding and p1[1] <= (h - padding):
            # Full border check would be:
            # int(bbox[0]) >= padding and int(bbox[0] + bbox[2]) <= (w - padding)
            # and int(bbox[1]) >= padding and int(bbox[1] + bbox[3]) <= (h - padding)
            self._no_time = 0
            s = np.array([[np.float32(center[0])], [np.float32(center[1])]])
            self.kalman.correct(s)  # feed the KCF measurement to the filter
            center = self.kalman.predict().astype(int)
            center = (center[0, 0], center[1, 0])
            cv.line(self._mask, self._center, center, (255, 255, 0), 2)  # extend the trajectory
            mmask = cv.cvtColor(self._mask.astype(np.uint8), cv.COLOR_BGR2GRAY)
            mmask = cv.bitwise_not(mmask)
            self._frame = cv.add(frame, self._mask, mask=mmask)
            self.update(bbox, center)
            cv.rectangle(self._frame, p1, p2, (255, 0, 0), 2, 1)  # draw the tracked box
            cv.putText(self._frame, "recognized", p2, cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
            return (True, self._frame)
        else:
            ret, bbox = recg_car(frame)  # tracker failed: fall back to the detector
            if ret:
                p1, p2 = (int(bbox[0]), int(bbox[1])), (int(bbox[0]) + int(bbox[2]), int(bbox[1]) + int(bbox[3]))
                center = (int((p1[0] + p2[0]) / 2), int((p1[1] + p2[1]) / 2))
                s = np.array([[np.float32(center[0])], [np.float32(center[1])]])
                self.kalman.correct(s)
                center = self.kalman.predict().astype(int)
                center = (center[0, 0], center[1, 0])
                cv.line(self._mask, self._center, center, (255, 255, 0), 2)
                mmask = cv.cvtColor(self._mask.astype(np.uint8), cv.COLOR_BGR2GRAY)
                mmask = cv.bitwise_not(mmask)
                self._frame = cv.add(frame, self._mask, mask=mmask)
                self.update(bbox, center)
                cv.rectangle(self._frame, p1, p2, (255, 0, 0), 2, 1)  # draw the detected box
                cv.putText(self._frame, "recognized", p2, cv.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 2)
                return (True, self._frame)
            else:
                self._no_time = self._no_time + 1  # neither tracker nor detector found it
                mmask = cv.cvtColor(self._mask.astype(np.uint8), cv.COLOR_BGR2GRAY)
                mmask = cv.bitwise_not(mmask)
                self._frame = cv.add(frame, self._mask, mask=mmask)
                return (True, self._frame)

def recg_car(frame):
    image_expanded = np.expand_dims(frame, axis=0)
    (boxes, scores, classes, num) = sess.run(
        [detection_boxes, detection_scores, detection_classes, num_detections],
        feed_dict={image_tensor: image_expanded})
    score = np.squeeze(scores)
    max_index = np.argmax(score)
    score = score[max_index]
    if score > THRE_VAL:
        box = np.squeeze(boxes)[max_index]  # (ymin, xmin, ymax, xmax), normalized
        h, w, _ = frame.shape
        min_point = (int(box[1] * w), int(box[0] * h))
        max_point = (int(box[3] * w), int(box[2] * h))
        bbox = (min_point[0], min_point[1], max_point[0] - min_point[0], max_point[1] - min_point[1])
        return True, bbox
    else:
        return False, None
ret, frame = cap.read()
if not ret:
    print("err")
    sys.exit()
ret,bbox = recg_car(frame)
person = Person(frame,bbox)
while True:
    ret, frame = cap.read()
    time = cv.getTickCount()
    if not ret:
        break
    person_exist, frame = person.precess(frame)
    cv.imshow("frame", frame)
    time = cv.getTickCount() - time
    print("processing time: " + str(time * 1000 / cv.getTickFrequency()) + "ms")
    key = cv.waitKey(1) & 0xff
    if key == 27:  # Esc quits
        break
cap.release()
cv.destroyAllWindows()
Reposted from: https://www.cnblogs.com/aoru45/p/10281739.html