Vehicle Counting with OpenCV (Part 1)


Link to the full code

First, a look at the end result:

Language: Python + OpenCV

We detect moving objects with a simple background subtraction algorithm.

Learning goals:

1. Understand background subtraction

2. Image filtering with OpenCV

3. Detecting objects via connected components

Foreground = current frame − background
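If a clean static frame of the empty scene is available, this formula translates almost directly into OpenCV. A minimal sketch, assuming a saved background image and a hand-picked threshold (the file names are placeholders):

import cv2

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# absolute per-pixel difference between the current frame and the background
diff = cv2.absdiff(frame, background)

# keep only pixels that changed significantly; 25 is an arbitrary cut-off
_, foreground = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

cv2.imwrite("foreground.png", foreground)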

But in some cases we cannot get a static background frame: the lighting may change, objects may get moved around, or there is simply always motion in the scene. In those cases we collect a number of frames and look for the pixel values that stay the same across most of them; those values become part of the background. The question is how to build that background and which filter is best suited for the job.
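One common way to build such a background is a running average over many frames, so that values which repeat in most frames dominate the estimate. A small sketch of that idea, assuming frames come from cv2.VideoCapture and with a hand-picked learning rate (this is only an illustration, not the method used below):

import cv2
import numpy as np

cap = cv2.VideoCapture("input.mp4")
ret, frame = cap.read()
avg = np.float32(frame)  # running-average accumulator

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # pixels that keep the same value in most frames dominate the average
    cv2.accumulateWeighted(frame, avg, 0.01)
    background = cv2.convertScaleAbs(avg)
    foreground = cv2.absdiff(frame, background)

cap.release()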

In this example we use MOG (Mixture of Gaussians) for background subtraction. The result:

You can see some noise and shadows; we use standard filters to remove them.

import os
import logging
import logging.handlers
import random

import numpy as np
import skvideo.io
import cv2
import matplotlib.pyplot as plt

import utils
# without this some strange errors happen
cv2.ocl.setUseOpenCL(False)
random.seed(123)

# ============================================================================
IMAGE_DIR = "./out"
VIDEO_SOURCE = "input.mp4"
SHAPE = (720, 1280)  # HxW
# ============================================================================


def train_bg_subtractor(inst, cap, num=500):
    '''
        BG subtractor needs to process some amount of frames
        before it starts giving results
    '''
    print('Training BG Subtractor...')
    i = 0
    for frame in cap:
        inst.apply(frame, None, 0.001)
        i += 1
        if i >= num:
            return cap


def main():
    log = logging.getLogger("main")

    # creating MOG bg subtractor with 500 frames in cache
    # and shadow detection
    bg_subtractor = cv2.createBackgroundSubtractorMOG2(
        history=500, detectShadows=True)

    # Set up image source
    # You can also use cv2.VideoCapture; for some reason it was not working here
    cap = skvideo.io.vreader(VIDEO_SOURCE)

    # skipping 500 frames to train bg subtractor
    train_bg_subtractor(bg_subtractor, cap, num=500)

    frame_number = -1
    for frame in cap:
        if not frame.any():
            log.error("Frame capture failed, stopping...")
            break

        frame_number += 1
        utils.save_frame(frame, "./out/frame_%04d.png" % frame_number)

        fg_mask = bg_subtractor.apply(frame, None, 0.001)
        utils.save_frame(fg_mask, "./out/fg_mask_%04d.png" % frame_number)


# ============================================================================
if __name__ == "__main__":
    log = utils.init_logging()

    if not os.path.exists(IMAGE_DIR):
        log.debug("Creating image directory `%s`...", IMAGE_DIR)
        os.makedirs(IMAGE_DIR)

    main()
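The script imports a small utils helper module that is not listed in this part. A minimal sketch of the two helpers called here; the bodies are assumptions (the module is also expected to provide get_centroid and distance, which appear later):

import logging
import cv2


def init_logging():
    # plain console logger used by the main script
    logging.basicConfig(level=logging.DEBUG,
                        format="%(asctime)s %(name)s %(levelname)s: %(message)s")
    return logging.getLogger("main")


def save_frame(frame, file_name, flip=True):
    # skvideo yields RGB frames while cv2.imwrite expects BGR, hence the flip
    if flip:
        frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    cv2.imwrite(file_name, frame)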

Filtering

For this example we use the following filters:

threshold, dilation, erosion, opening, closing

Now we use them to remove some of the noise from the foreground mask. First we apply a closing to fill the holes inside regions, then an opening to remove isolated 1-2 px points, and finally a dilation to make the objects larger.

def filter_mask(img):
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2))

    # Fill any small holes
    closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
    # Remove noise
    opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
    # Dilate to merge adjacent blobs
    dilation = cv2.dilate(opening, kernel, iterations=2)

    # threshold: drop weak pixels and MOG2 shadow values (below 240)
    dilation[dilation < 240] = 0

    return dilation
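A short usage sketch for a single saved mask (the file names are placeholders):

import cv2

fg_mask = cv2.imread("fg_mask_0001.png", cv2.IMREAD_GRAYSCALE)  # MOG2 output saved earlier
clean_mask = filter_mask(fg_mask)
cv2.imwrite("fg_mask_0001_filtered.png", clean_mask)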

The resulting foreground mask:

Object detection via connected components

We use the cv2.findContours function for this:

cv2.RETR_EXTERNAL — get only outer contours.
cv2.CHAIN_APPROX_TC89_L1 — use the Teh-Chin chain approximation algorithm (faster).

def get_centroid(x, y, w, h):
    x1 = int(w / 2)
    y1 = int(h / 2)

    cx = x + x1
    cy = y + y1

    return (cx, cy)


def detect_vehicles(fg_mask, min_contour_width=35, min_contour_height=35):
    matches = []

    # finding external contours (OpenCV 3.x returns image, contours, hierarchy)
    im, contours, hierarchy = cv2.findContours(
        fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1)

    # filtering by width and height
    for (i, contour) in enumerate(contours):
        (x, y, w, h) = cv2.boundingRect(contour)
        contour_valid = (w >= min_contour_width) and (h >= min_contour_height)

        if not contour_valid:
            continue

        # getting center of the bounding box
        centroid = get_centroid(x, y, w, h)

        matches.append(((x, y, w, h), centroid))

    return matches
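A hypothetical usage example: run detect_vehicles on a filtered mask and draw the bounding boxes and centroids on the original frame to inspect the result (file names are placeholders):

import cv2

frame = cv2.imread("frame_0001.png")
fg_mask = cv2.imread("fg_mask_0001_filtered.png", cv2.IMREAD_GRAYSCALE)

for (x, y, w, h), (cx, cy) in detect_vehicles(fg_mask):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)  # bounding box
    cv2.circle(frame, (cx, cy), 3, (0, 0, 255), -1)               # centroid

cv2.imwrite("detections_0001.png", frame)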

We filter the contours by width and height and attach the centroid of each bounding box.

Now let's wrap this up into the general processing pipeline:

class PipelineRunner(object):
    '''
        Very simple pipeline.

        Just run passed processors in order with passing context from one to
        another.

        You can also set log level for processors.
    '''

    def __init__(self, pipeline=None, log_level=logging.DEBUG):
        self.pipeline = pipeline or []
        self.context = {}
        self.log = logging.getLogger(self.__class__.__name__)
        self.log.setLevel(log_level)
        self.log_level = log_level
        self.set_log_level()

    def set_context(self, data):
        self.context = data

    def add(self, processor):
        if not isinstance(processor, PipelineProcessor):
            raise Exception(
                'Processor should be an instance of PipelineProcessor.')
        processor.log.setLevel(self.log_level)
        self.pipeline.append(processor)

    def remove(self, name):
        for i, p in enumerate(self.pipeline):
            if p.__class__.__name__ == name:
                del self.pipeline[i]
                return True
        return False

    def set_log_level(self):
        for p in self.pipeline:
            p.log.setLevel(self.log_level)

    def run(self):
        for p in self.pipeline:
            self.context = p(self.context)

        self.log.debug("Frame #%d processed.", self.context['frame_number'])

        return self.context


class PipelineProcessor(object):
    '''
        Base class for processors.
    '''

    def __init__(self):
        self.log = logging.getLogger(self.__class__.__name__)
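A tiny sketch of how the runner passes the context dict from one processor to the next; the DummyProcessor here is purely illustrative and not part of the project:

import logging


class DummyProcessor(PipelineProcessor):
    def __call__(self, context):
        self.log.debug("got frame #%d", context['frame_number'])
        return context


pipeline = PipelineRunner(pipeline=[DummyProcessor()], log_level=logging.DEBUG)
pipeline.set_context({'frame': None, 'frame_number': 0})
pipeline.run()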

As input, the constructor takes a list of processors that will be run in order. Each processor does one part of the job. Let's start with the contour detection processor.

class ContourDetection(PipelineProcessor):
    '''
        Detecting moving objects.

        Purpose of this processor is to subtract the background, get moving
        objects with a cv2.findContours method, and then filter them by
        width and height.

        bg_subtractor - background subtractor instance.
        min_contour_width - min bounding rectangle width.
        min_contour_height - min bounding rectangle height.
        save_image - if True will save detected objects mask to file.
        image_dir - where to save images (must exist).
    '''

    def __init__(self, bg_subtractor, min_contour_width=35,
                 min_contour_height=35, save_image=False, image_dir='images'):
        super(ContourDetection, self).__init__()

        self.bg_subtractor = bg_subtractor
        self.min_contour_width = min_contour_width
        self.min_contour_height = min_contour_height
        self.save_image = save_image
        self.image_dir = image_dir

    def filter_mask(self, img, a=None):
        '''
            These filters are hand-picked just based on visual tests
        '''
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2, 2))

        # Fill any small holes
        closing = cv2.morphologyEx(img, cv2.MORPH_CLOSE, kernel)
        # Remove noise
        opening = cv2.morphologyEx(closing, cv2.MORPH_OPEN, kernel)
        # Dilate to merge adjacent blobs
        dilation = cv2.dilate(opening, kernel, iterations=2)

        return dilation

    def detect_vehicles(self, fg_mask, context):
        matches = []

        # finding external contours
        im2, contours, hierarchy = cv2.findContours(
            fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_TC89_L1)

        for (i, contour) in enumerate(contours):
            (x, y, w, h) = cv2.boundingRect(contour)
            contour_valid = (w >= self.min_contour_width) and (
                h >= self.min_contour_height)

            if not contour_valid:
                continue

            centroid = utils.get_centroid(x, y, w, h)

            matches.append(((x, y, w, h), centroid))

        return matches

    def __call__(self, context):
        frame = context['frame'].copy()
        frame_number = context['frame_number']

        fg_mask = self.bg_subtractor.apply(frame, None, 0.001)

        # just thresholding values
        fg_mask[fg_mask < 240] = 0

        fg_mask = self.filter_mask(fg_mask, frame_number)

        if self.save_image:
            utils.save_frame(fg_mask, self.image_dir +
                             "/mask_%04d.png" % frame_number, flip=False)

        context['objects'] = self.detect_vehicles(fg_mask, context)
        context['fg_mask'] = fg_mask

        return context

This simply merges the background subtraction, filtering and detection steps into one processor.
Now let's create a processor that links the detected objects across frames, builds their paths, and counts the vehicles that reach the exit zone.

class VehicleCounter(PipelineProcessor):
    '''
        Counting vehicles that entered the exit zone.

        Based on the detected objects and a local cache, this class builds
        object paths and counts the objects that entered the exit zone
        defined by the exit masks.

        exit_masks - list of the exit masks.
        path_size - max number of points in a path.
        max_dst - max distance between two points.
    '''

    def __init__(self, exit_masks=[], path_size=10, max_dst=30,
                 x_weight=1.0, y_weight=1.0):
        super(VehicleCounter, self).__init__()

        self.exit_masks = exit_masks

        self.vehicle_count = 0
        self.path_size = path_size
        self.pathes = []
        self.max_dst = max_dst
        self.x_weight = x_weight
        self.y_weight = y_weight

    def check_exit(self, point):
        for exit_mask in self.exit_masks:
            try:
                if exit_mask[point[1]][point[0]] == 255:
                    return True
            except:
                return True
        return False

    def __call__(self, context):
        objects = context['objects']
        context['exit_masks'] = self.exit_masks
        context['pathes'] = self.pathes
        context['vehicle_count'] = self.vehicle_count
        if not objects:
            return context

        points = np.array(objects)[:, 0:2]
        points = points.tolist()

        # add new points if pathes is empty
        if not self.pathes:
            for match in points:
                self.pathes.append([match])
        else:
            # link new points with old pathes based on minimum distance between
            # points
            new_pathes = []

            for path in self.pathes:
                _min = 999999
                _match = None
                for p in points:
                    if len(path) == 1:
                        # distance from last point to current
                        d = utils.distance(p[0], path[-1][0])
                    else:
                        # based on 2 prev points predict next point and calculate
                        # distance from predicted next point to current
                        xn = 2 * path[-1][0][0] - path[-2][0][0]
                        yn = 2 * path[-1][0][1] - path[-2][0][1]
                        d = utils.distance(
                            p[0], (xn, yn),
                            x_weight=self.x_weight,
                            y_weight=self.y_weight
                        )

                    if d < _min:
                        _min = d
                        _match = p

                if _match and _min <= self.max_dst:
                    points.remove(_match)
                    path.append(_match)
                    new_pathes.append(path)

                # do not drop path if current frame has no matches
                if _match is None:
                    new_pathes.append(path)

            self.pathes = new_pathes

            # add new pathes
            if len(points):
                for p in points:
                    # do not add points that already should be counted
                    if self.check_exit(p[1]):
                        continue
                    self.pathes.append([p])

        # save only last N points in path
        for i, _ in enumerate(self.pathes):
            self.pathes[i] = self.pathes[i][self.path_size * -1:]

        # count vehicles and drop counted pathes:
        new_pathes = []
        for i, path in enumerate(self.pathes):
            d = path[-2:]

            if (
                # need at least two points to count
                len(d) >= 2 and
                # prev point not in exit zone
                not self.check_exit(d[0][1]) and
                # current point in exit zone
                self.check_exit(d[1][1]) and
                # path len is bigger than min
                self.path_size <= len(path)
            ):
                self.vehicle_count += 1
            else:
                # prevent linking with path that is already in exit zone
                add = True
                for p in path:
                    if self.check_exit(p[1]):
                        add = False
                        break
                if add:
                    new_pathes.append(path)

        self.pathes = new_pathes

        context['pathes'] = self.pathes
        context['objects'] = objects
        context['vehicle_count'] = self.vehicle_count

        self.log.debug('#VEHICLES FOUND: %s' % self.vehicle_count)

        return context
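A hypothetical sketch of wiring both processors into the runner inside the main frame loop. The exit mask below is just a rectangle at the bottom of the frame for illustration; the real exit zones depend on your camera view:

import logging

import numpy as np

# one rectangular exit zone at the bottom of the frame (coordinates are placeholders)
exit_mask = np.zeros(SHAPE, dtype=np.uint8)
exit_mask[600:720, :] = 255

pipeline = PipelineRunner(pipeline=[
    ContourDetection(bg_subtractor=bg_subtractor,
                     save_image=True, image_dir=IMAGE_DIR),
    VehicleCounter(exit_masks=[exit_mask]),
], log_level=logging.DEBUG)

# inside the frame loop:
pipeline.set_context({
    'frame': frame,
    'frame_number': frame_number,
})
context = pipeline.run()
print(context['vehicle_count'])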


Summary

In this first part we built the basic detection pipeline: MOG2 background subtraction, morphological filtering of the foreground mask, contour-based object detection, and a simple path-based vehicle counter.
