yunyang tensorflow-yolov3 + Intel Realsense D435 (concurrent): using the locals() function to batch-configure multiple cameras, run detection on each stream, and draw bounding boxes (code notes, code examples)


Contents

      • 20191126
      • 20191202-1
      • 20191202-2

20191126

# -*- encoding: utf-8 -*-
"""
@File    : test-使用locals()函数批量配置摄像头运行识别程序并画框.py
@Time    : 2019/11/26 11:20
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""

import cv2
import numpy as np
import tensorflow as tf
import core.utils as utils
from core.config import cfg
from core.yolov3 import YOLOV3
import pyrealsense2 as rs


class YoloTest(object):
    def __init__(self):
        # D·C 191111: __C.TEST.INPUT_SIZE = 544
        self.input_size = cfg.TEST.INPUT_SIZE
        self.anchor_per_scale = cfg.YOLO.ANCHOR_PER_SCALE
        # Dontla 191106: dict of class names read from the class.names file
        self.classes = utils.read_class_names(cfg.YOLO.CLASSES)
        # D·C 191115: number of classes
        self.num_classes = len(self.classes)
        self.anchors = np.array(utils.get_anchors(cfg.YOLO.ANCHORS))
        # D·C 191111: __C.TEST.SCORE_THRESHOLD = 0.3
        self.score_threshold = cfg.TEST.SCORE_THRESHOLD
        # D·C 191120: __C.TEST.IOU_THRESHOLD = 0.45
        self.iou_threshold = cfg.TEST.IOU_THRESHOLD
        self.moving_ave_decay = cfg.YOLO.MOVING_AVE_DECAY
        # D·C 191120: __C.TEST.ANNOT_PATH = "./data/dataset/Dontla/20191023_Artificial_Flower/test.txt"
        self.annotation_path = cfg.TEST.ANNOT_PATH
        # D·C 191120: __C.TEST.WEIGHT_FILE = "./checkpoint/f_g_c_weights_files/yolov3_test_loss=15.8845.ckpt-47"
        self.weight_file = cfg.TEST.WEIGHT_FILE
        # D·C 191115: write flag (bool)
        self.write_image = cfg.TEST.WRITE_IMAGE
        # D·C 191115: __C.TEST.WRITE_IMAGE_PATH = "./data/detection/" (where images with drawn boxes and labels are written)
        self.write_image_path = cfg.TEST.WRITE_IMAGE_PATH
        # D·C 191116: TEST.SHOW_LABEL is set to True
        self.show_label = cfg.TEST.SHOW_LABEL

        # D·C 191120: create the "input" name scope
        with tf.name_scope('input'):
            # D·C 191120: create the input placeholders
            self.input_data = tf.placeholder(dtype=tf.float32, name='input_data')
            self.trainable = tf.placeholder(dtype=tf.bool, name='trainable')

        model = YOLOV3(self.input_data, self.trainable)
        self.pred_sbbox, self.pred_mbbox, self.pred_lbbox = model.pred_sbbox, model.pred_mbbox, model.pred_lbbox

        # D·C 191120: create the exponential-moving-average name scope
        with tf.name_scope('ema'):
            ema_obj = tf.train.ExponentialMovingAverage(self.moving_ave_decay)

        # D·C 191120: launch the graph in a session; allow_soft_placement=True lets TF pick an available GPU or CPU automatically
        self.sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
        # D·C 191120: variables_to_restore() maps the EMA shadow variables back onto the variables themselves when the model is loaded
        self.saver = tf.train.Saver(ema_obj.variables_to_restore())
        # D·C 191120: restore the model from the checkpoint
        self.saver.restore(self.sess, self.weight_file)

    def predict(self, image):
        # D·C 191107: work on a copy so the original image is not modified in place
        org_image = np.copy(image)
        # D·C 191107: image size
        org_h, org_w, _ = org_image.shape
        # D·C 191108: letterbox the source image into the square network input (default 544x544: the resized source in the middle, padding above and below)
        image_data = utils.image_preprocess(image, [self.input_size, self.input_size])
        # D·C 191108: check the shape:
        # print(image_data.shape)
        # (544, 544, 3)
        # D·C 191108: add a batch axis
        image_data = image_data[np.newaxis, ...]
        # D·C 191108: check the shape:
        # print(image_data.shape)
        # (1, 544, 544, 3)
        # D·C 191110: the three outputs hold the raw predicted boxes at the three scales (useful, useless and overlapping ones all included)
        pred_sbbox, pred_mbbox, pred_lbbox = self.sess.run(
            [self.pred_sbbox, self.pred_mbbox, self.pred_lbbox],
            feed_dict={
                self.input_data: image_data,
                self.trainable: False
            })
        # D·C 191110: check the type, shape and values of the three outputs:
        # print(type(pred_sbbox))
        # print(type(pred_mbbox))
        # print(type(pred_lbbox))
        # all <class 'numpy.ndarray'>
        # print(pred_sbbox.shape)
        # print(pred_mbbox.shape)
        # print(pred_lbbox.shape)
        # (1, 68, 68, 3, 6)
        # (1, 34, 34, 3, 6)
        # (1, 17, 17, 3, 6)
        # print(pred_sbbox)
        # print(pred_mbbox)
        # print(pred_lbbox)
        # D·C 191110: reshape each output to rows of (5 + num_classes) columns and stack the three scales; the trailing num_classes columns appear to hold the per-class probabilities
        pred_bbox = np.concatenate([np.reshape(pred_sbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_mbbox, (-1, 5 + self.num_classes)),
                                    np.reshape(pred_lbbox, (-1, 5 + self.num_classes))], axis=0)
        # D·C 191111: check pred_bbox and its shape:
        # print(pred_bbox)
        # print(pred_bbox.shape)
        # (18207, 6)
        # D·C 191111: first filtering pass: drop boxes scoring below score_threshold (far fewer boxes remain)
        # D·C 191115: bboxes has shape [n, 6]: columns 0-3 are coordinates, column 4 the score, column 5 the class index
        bboxes = utils.postprocess_boxes(pred_bbox, (org_h, org_w), self.input_size, self.score_threshold)
        # D·C 191111: second filtering pass: non-maximum suppression with iou_threshold
        bboxes = utils.nms(bboxes, self.iou_threshold)
        return bboxes

    def dontla_evaluate_detect(self):
        ctx = rs.context()
        # check that all cameras are connected
        cam_num = len(ctx.devices)
        if cam_num < 2:
            print('Not all cameras are connected!')
        else:
            for i in range(cam_num):
                locals()['pipeline' + str(i)] = rs.pipeline()
                locals()['config' + str(i)] = rs.config()
                locals()['serial' + str(i)] = ctx.devices[i].get_info(rs.camera_info.serial_number)
                locals()['config' + str(i)].enable_device(locals()['serial' + str(i)])
                locals()['config' + str(i)].enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
                locals()['config' + str(i)].enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
                locals()['pipeline' + str(i)].start(locals()['config' + str(i)])
                # create an align object (align depth to color)
                locals()['align' + str(i)] = rs.align(rs.stream.color)
            try:
                while True:
                    for i in range(cam_num):
                        locals()['frames' + str(i)] = locals()['pipeline' + str(i)].wait_for_frames()
                        # get the aligned frame set
                        locals()['aligned_frames' + str(i)] = locals()['align' + str(i)].process(locals()['frames' + str(i)])
                        # get the aligned depth frame and color frame
                        locals()['aligned_depth_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_depth_frame()
                        locals()['color_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_color_frame()
                        if not locals()['aligned_depth_frame' + str(i)] or not locals()['color_frame' + str(i)]:
                            continue
                        # get the color frame intrinsics
                        locals()['color_profile' + str(i)] = locals()['color_frame' + str(i)].get_profile()
                        locals()['cvsprofile' + str(i)] = rs.video_stream_profile(locals()['color_profile' + str(i)])
                        locals()['color_intrin' + str(i)] = locals()['cvsprofile' + str(i)].get_intrinsics()
                        locals()['color_intrin_part' + str(i)] = [locals()['color_intrin' + str(i)].ppx,
                                                                  locals()['color_intrin' + str(i)].ppy,
                                                                  locals()['color_intrin' + str(i)].fx,
                                                                  locals()['color_intrin' + str(i)].fy]
                        locals()['color_image' + str(i)] = np.asanyarray(locals()['color_frame' + str(i)].get_data())
                        # D·C 191121: show the frame:
                        # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                        # cv2.imshow('RealSense', color_frame)
                        # cv2.waitKey(1)
                        locals()['bboxes_pr' + str(i)] = self.predict(locals()['color_image' + str(i)])
                        locals()['image' + str(i)] = utils.draw_bbox(locals()['color_image' + str(i)],
                                                                     locals()['bboxes_pr' + str(i)],
                                                                     locals()['aligned_depth_frame' + str(i)],
                                                                     locals()['color_intrin_part' + str(i)],
                                                                     show_label=self.show_label)
                        # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                        cv2.imshow('window{}'.format(i), locals()['image' + str(i)])
                        cv2.waitKey(1)
            finally:
                # note: this only stops the pipeline for the last loop index; the 20191202 revisions stop them all in a loop
                locals()['pipeline' + str(i)].stop()


if __name__ == '__main__':
    YoloTest().dontla_evaluate_detect()
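A note on the pattern used above: writing entries into locals() inside a function is not guaranteed by the language to create real local variables; it happens to work here only because every read also goes back through the locals() dict, which is a CPython implementation detail. The sketch below is my own illustration, not part of the project (names such as open_all_cameras are invented for the example); it expresses the same batch configuration with an ordinary dict keyed by camera index, which is a more robust way to hold per-camera state.

import numpy as np
import pyrealsense2 as rs


def open_all_cameras(width=640, height=480, fps=30):
    """Start one pipeline per connected RealSense device and return per-camera state."""
    ctx = rs.context()
    cams = {}
    for i, dev in enumerate(ctx.query_devices()):
        serial = dev.get_info(rs.camera_info.serial_number)
        config = rs.config()
        # bind this config to one physical camera and enable depth + color streams
        config.enable_device(serial)
        config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)
        config.enable_stream(rs.stream.color, width, height, rs.format.bgr8, fps)
        pipeline = rs.pipeline()
        pipeline.start(config)
        cams[i] = {'serial': serial,
                   'pipeline': pipeline,
                   # one align object per camera, aligning depth to the color stream
                   'align': rs.align(rs.stream.color)}
    return cams


def grab_aligned_frames(cams):
    """Yield (index, color image as ndarray, aligned depth frame) for each camera."""
    for i, cam in cams.items():
        frames = cam['pipeline'].wait_for_frames()
        aligned = cam['align'].process(frames)
        depth_frame = aligned.get_depth_frame()
        color_frame = aligned.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        yield i, np.asanyarray(color_frame.get_data()), depth_frame

Each camera's pipeline, serial number and align object live in one dict entry, so the frame loop becomes a plain iteration instead of a family of string-keyed locals() lookups.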

20191202-1

This revision adds a camera initialization step (hardware_reset), a device-count check, and a clean-termination path. Only dontla_evaluate_detect changes; __init__ and predict are identical to the 20191126 listing above and are not repeated below.
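As a rough illustration of that initialization step (a minimal sketch under my own naming, not the project's code), the reset-and-wait logic in the listing below boils down to something like this; whether an extra settle delay is really needed after the devices re-enumerate is the open question the author raises in the comments.

import time

import pyrealsense2 as rs


def reset_cameras(expected_cam_num, poll_interval=0.5, settle_seconds=2.0):
    """hardware_reset() every device, then wait until the expected number re-enumerates."""
    ctx = rs.context()
    for dev in ctx.query_devices():
        dev.hardware_reset()
    # After a reset each camera briefly drops off the USB bus, so poll the device list
    # until all of them are visible again, then give them a moment to settle.
    while len(ctx.query_devices()) != expected_cam_num:
        time.sleep(poll_interval)
    time.sleep(settle_seconds)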

# -*- coding: utf-8 -*-
"""
@File    : test-multicam_multithreading.py
@Time    : 2019/11/30 15:18
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""

import cv2
import numpy as np
import tensorflow as tf
import core.utils as utils
from core.config import cfg
from core.yolov3 import YOLOV3
import pyrealsense2 as rs
import time
import sys


class YoloTest(object):
    # __init__() and predict() are identical to the 20191126 listing above and are omitted here.

    def dontla_evaluate_detect(self):
        ctx = rs.context()
        # devices = ctx.query_devices()
        # number of cameras
        cam_num = 6
        # reset every camera
        # should there be a delay after hardware_reset()? without one it raises an error
        for dev in ctx.query_devices():
            dev.hardware_reset()
            while len(ctx.query_devices()) != cam_num:
                time.sleep(0.5)
            print('Camera {} reset successfully'.format(dev.get_info(rs.camera_info.serial_number)))
        # D·C 191202: guess: the cameras are unstable for a few seconds after a reset; whether that is the same issue as devices going missing from ctx.query_devices() remains to be verified!
        # D·C 191202: beyond that, I also suspect that calling ctx.query_devices() disturbs the device connections, so query it as rarely as possible here; even if it has no effect, we should still wait for the devices to stabilize before continuing.
        # countdown sleep to guard against a failed reset
        # sleep_time = 0
        # for i in range(sleep_time):
        #     print('Countdown {}'.format(sleep_time - i))
        #     time.sleep(1)
        # keep checking whether exactly 6 cameras are connected; if so, continue and record the actual count, otherwise keep checking (exit once the retry limit is exceeded)
        devices = ctx.query_devices()
        connected_cam_num = len(devices)
        veri_times = 10
        while connected_cam_num != cam_num:
            veri_times -= 1
            if veri_times == -1:
                sys.exit()
            devices = ctx.query_devices()
            connected_cam_num = len(devices)
        print('Number of cameras: {}'.format(connected_cam_num))
        # print each camera's serial number and USB port, and build the caption strings used as window names
        cam_id = 0
        serial_list = []
        for i in devices:
            cam_id += 1
            serial_list.append('camera{}; serials number {}; usb port {}'.format(cam_id, i.get_info(rs.camera_info.serial_number),
                                                                                 i.get_info(rs.camera_info.usb_type_descriptor)))
            print('serial number {}:{};usb port:{}'.format(cam_id, i.get_info(rs.camera_info.serial_number),
                                                           i.get_info(rs.camera_info.usb_type_descriptor)))
        # configure the basic objects for each camera
        for i in range(connected_cam_num):
            locals()['pipeline' + str(i)] = rs.pipeline()
            locals()['config' + str(i)] = rs.config()
            locals()['serial' + str(i)] = ctx.devices[i].get_info(rs.camera_info.serial_number)
            locals()['config' + str(i)].enable_device(locals()['serial' + str(i)])
            locals()['config' + str(i)].enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
            locals()['config' + str(i)].enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
            locals()['pipeline' + str(i)].start(locals()['config' + str(i)])
            # create an align object (align depth to color)
            locals()['align' + str(i)] = rs.align(rs.stream.color)
        # run the streams and detect
        try:
            # break flag so that a key press can leave the loop and close the windows
            break2 = False
            while True:
                for i in range(connected_cam_num):
                    locals()['frames' + str(i)] = locals()['pipeline' + str(i)].wait_for_frames()
                    # get the aligned frame set
                    locals()['aligned_frames' + str(i)] = locals()['align' + str(i)].process(locals()['frames' + str(i)])
                    # get the aligned depth frame and color frame
                    locals()['aligned_depth_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_depth_frame()
                    locals()['color_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_color_frame()
                    if not locals()['aligned_depth_frame' + str(i)] or not locals()['color_frame' + str(i)]:
                        continue
                    # get the color frame intrinsics
                    locals()['color_profile' + str(i)] = locals()['color_frame' + str(i)].get_profile()
                    locals()['cvsprofile' + str(i)] = rs.video_stream_profile(locals()['color_profile' + str(i)])
                    locals()['color_intrin' + str(i)] = locals()['cvsprofile' + str(i)].get_intrinsics()
                    locals()['color_intrin_part' + str(i)] = [locals()['color_intrin' + str(i)].ppx,
                                                              locals()['color_intrin' + str(i)].ppy,
                                                              locals()['color_intrin' + str(i)].fx,
                                                              locals()['color_intrin' + str(i)].fy]
                    locals()['color_image' + str(i)] = np.asanyarray(locals()['color_frame' + str(i)].get_data())
                    # D·C 191121: show the frame:
                    # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                    # cv2.imshow('RealSense', color_frame)
                    # cv2.waitKey(1)
                    locals()['bboxes_pr' + str(i)] = self.predict(locals()['color_image' + str(i)])
                    locals()['image' + str(i)] = utils.draw_bbox(locals()['color_image' + str(i)],
                                                                 locals()['bboxes_pr' + str(i)],
                                                                 locals()['aligned_depth_frame' + str(i)],
                                                                 locals()['color_intrin_part' + str(i)],
                                                                 show_label=self.show_label)
                    # cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
                    # cv2.imshow('window{}'.format(i), locals()['image' + str(i)])
                    cv2.imshow('{}'.format(serial_list[i]), locals()['image' + str(i)])
                    key = cv2.waitKey(1)
                    # if ESC is pressed, break out of the loops
                    if key == 27:
                        # a plain return would probably work as well
                        # return
                        break2 = True
                        break
                if break2 == True:
                    break
        finally:
            # it seems safer to close the windows before stopping the streams
            # destroy all windows
            cv2.destroyAllWindows()
            print('All windows closed!')
            # stop all streams (note: without a loop this only stops the last pipeline; fixed in the next revision)
            locals()['pipeline' + str(i)].stop()
            print('Stopping all streams, please wait a few seconds for the program to finish cleanly!')


if __name__ == '__main__':
    YoloTest().dontla_evaluate_detect()
    print('Program finished!')
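The termination path in this version closes the OpenCV windows first and only then stops the streams; however, the stop call sits outside any loop, so only the pipeline of the last camera index is actually stopped, which the 20191202-2 revision fixes. A hedged sketch of the intended teardown (the helper name and the pipelines argument are my own):

import cv2


def shutdown(pipelines):
    """Close the display windows first, then stop every camera pipeline."""
    cv2.destroyAllWindows()
    for pipeline in pipelines:
        pipeline.stop()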

20191202-2

This revision adds a continuous-verification mechanism for the device count, tidies up part of the structure, and cleans up some of the code. Again, only dontla_evaluate_detect changes; __init__ and predict are unchanged from the 20191126 listing.
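The idea behind continuous verification is that a single successful ctx.query_devices() call is not trusted: the device count has to match the expected number for several consecutive queries before the cameras are treated as stable. The helper below is a compact sketch of that check (the function name and return convention are my own; the listing below inlines the same loop twice, once before and once after the reset):

import sys

import pyrealsense2 as rs


def wait_for_stable_camera_count(ctx, cam_num, stable_checks=10, max_checks=100):
    """Block until cam_num devices are seen in stable_checks consecutive queries; exit after max_checks tries."""
    consecutive = 0
    for _ in range(max_checks):
        if len(ctx.query_devices()) == cam_num:
            consecutive += 1
            if consecutive == stable_checks:
                return
        else:
            # any mismatch resets the streak
            consecutive = 0
    print('Detection timed out, please check the camera connections!')
    sys.exit(1)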

# -*- coding: utf-8 -*-
"""
@File    : test-multicam_multithreading.py
@Time    : 2019/11/30 15:18
@Author  : Dontla
@Email   : sxana@qq.com
@Software: PyCharm
"""

import cv2
import numpy as np
import tensorflow as tf
import core.utils as utils
from core.config import cfg
from core.yolov3 import YOLOV3
import pyrealsense2 as rs
import time
import sys


class YoloTest(object):
    # __init__() and predict() are identical to the 20191126 listing above and are omitted here.

    def dontla_evaluate_detect(self):
        # number of cameras (set the total number of cameras to be used here)
        cam_num = 6
        ctx = rs.context()

        # continuous-verification mechanism
        # D·C 191202: max_veri_times is the maximum number of verification attempts; continuous_stable_value is the number of consecutive matching queries required before the devices are considered stable
        max_veri_times = 100
        continuous_stable_value = 10
        print('\n', end='')
        print('Starting continuous verification; required stable count: {}, maximum attempts: {}:'.format(continuous_stable_value, max_veri_times))
        continuous_value = 0
        veri_times = 0
        while True:
            devices = ctx.query_devices()
            connected_cam_num = len(devices)
            if connected_cam_num == cam_num:
                continuous_value += 1
                if continuous_value == continuous_stable_value:
                    break
            else:
                continuous_value = 0
            veri_times += 1
            if veri_times == max_veri_times:
                print('Detection timed out, please check the camera connections!')
                sys.exit()
        print('Number of cameras: {}'.format(connected_cam_num))

        # reset every camera
        # should there be a delay after hardware_reset()? without one it raises an error
        print('\n', end='')
        print('Initializing cameras:')
        for dev in ctx.query_devices():
            dev.hardware_reset()
            while len(ctx.query_devices()) != cam_num:
                time.sleep(0.5)
            print('Camera {} reset successfully'.format(dev.get_info(rs.camera_info.serial_number)))
        # D·C 191202: guess: the cameras are unstable for a few seconds after a reset; whether that is the same issue as devices going missing from ctx.query_devices() remains to be verified!
        # D·C 191202: beyond that, I also suspect that calling ctx.query_devices() disturbs the device connections, so query it as rarely as possible here; even if it has no effect, we should still wait for the devices to stabilize before continuing.

        # continuous-verification mechanism (repeated after the reset)
        # D·C 191202: same parameters as above: at most max_veri_times queries, continuous_stable_value consecutive matches required
        print('\n', end='')
        print('Starting continuous verification; required stable count: {}, maximum attempts: {}:'.format(continuous_stable_value, max_veri_times))
        continuous_value = 0
        veri_times = 0
        while True:
            devices = ctx.query_devices()
            connected_cam_num = len(devices)
            if connected_cam_num == cam_num:
                continuous_value += 1
                if continuous_value == continuous_stable_value:
                    break
            else:
                continuous_value = 0
            veri_times += 1
            if veri_times == max_veri_times:
                print('Detection timed out, please check the camera connections!')
                sys.exit()
        print('Number of cameras: {}'.format(connected_cam_num))

        # print each camera's serial number and USB port, and build the caption strings used as window names
        print('\n', end='')
        cam_id = 0
        serial_list = []
        for i in devices:
            cam_id += 1
            serial_list.append('camera{}; serials number {}; usb port {}'.format(cam_id, i.get_info(rs.camera_info.serial_number),
                                                                                 i.get_info(rs.camera_info.usb_type_descriptor)))
            print('serial number {}:{};usb port:{}'.format(cam_id, i.get_info(rs.camera_info.serial_number),
                                                           i.get_info(rs.camera_info.usb_type_descriptor)))

        # configure the basic objects for each camera
        for i in range(connected_cam_num):
            # D·C 191203: is ctx needed inside the parentheses? it seems to make little difference either way, but leaving it out makes the IDE show a warning
            locals()['pipeline' + str(i)] = rs.pipeline(ctx)
            locals()['config' + str(i)] = rs.config()
            locals()['serial' + str(i)] = ctx.devices[i].get_info(rs.camera_info.serial_number)
            locals()['config' + str(i)].enable_device(locals()['serial' + str(i)])
            locals()['config' + str(i)].enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
            locals()['config' + str(i)].enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
            locals()['pipeline' + str(i)].start(locals()['config' + str(i)])
            # create an align object (align depth to color)
            locals()['align' + str(i)] = rs.align(rs.stream.color)

        # run the streams and detect
        print('\n', end='')
        print('Starting detection:')
        try:
            # break flag so that a key press can leave the loop and close the windows
            break2 = False
            while True:
                for i in range(connected_cam_num):
                    locals()['frames' + str(i)] = locals()['pipeline' + str(i)].wait_for_frames()
                    # get the aligned frame set
                    locals()['aligned_frames' + str(i)] = locals()['align' + str(i)].process(locals()['frames' + str(i)])
                    # get the aligned depth frame and color frame
                    locals()['aligned_depth_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_depth_frame()
                    locals()['color_frame' + str(i)] = locals()['aligned_frames' + str(i)].get_color_frame()
                    if not locals()['aligned_depth_frame' + str(i)] or not locals()['color_frame' + str(i)]:
                        continue
                    # get the color frame intrinsics
                    locals()['color_profile' + str(i)] = locals()['color_frame' + str(i)].get_profile()
                    locals()['cvsprofile' + str(i)] = rs.video_stream_profile(locals()['color_profile' + str(i)])
                    locals()['color_intrin' + str(i)] = locals()['cvsprofile' + str(i)].get_intrinsics()
                    locals()['color_intrin_part' + str(i)] = [locals()['color_intrin' + str(i)].ppx,
                                                              locals()['color_intrin' + str(i)].ppy,
                                                              locals()['color_intrin' + str(i)].fx,
                                                              locals()['color_intrin' + str(i)].fy]
                    locals()['color_image' + str(i)] = np.asanyarray(locals()['color_frame' + str(i)].get_data())
                    locals()['bboxes_pr' + str(i)] = self.predict(locals()['color_image' + str(i)])
                    locals()['image' + str(i)] = utils.draw_bbox(locals()['color_image' + str(i)],
                                                                 locals()['bboxes_pr' + str(i)],
                                                                 locals()['aligned_depth_frame' + str(i)],
                                                                 locals()['color_intrin_part' + str(i)],
                                                                 show_label=self.show_label)
                    # D·C 191202: wanted to create a resizable window with a fixed aspect ratio, but it did not work; an OpenCV bug?
                    # cv2.namedWindow('{}'.format(serial_list[i]),
                    #                 flags=cv2.WINDOW_NORMAL | cv2.WINDOW_FREERATIO | cv2.WINDOW_GUI_EXPANDED)
                    cv2.imshow('{}'.format(serial_list[i]), locals()['image' + str(i)])
                    key = cv2.waitKey(1)
                    # if ESC is pressed, break out of the loops
                    if key == 27:
                        # a plain return would probably work as well
                        # return
                        break2 = True
                        break
                if break2:
                    break
        finally:
            # it seems safer to close the windows before stopping the streams
            # destroy all windows
            cv2.destroyAllWindows()
            print('\n', end='')
            print('All windows closed!')
            # stop all streams
            for i in range(connected_cam_num):
                locals()['pipeline' + str(i)].stop()
            print('Stopping all streams, please wait a few seconds for the program to finish cleanly!')


if __name__ == '__main__':
    YoloTest().dontla_evaluate_detect()
    print('Program finished!')
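For context on why the aligned depth frame and color_intrin_part = [ppx, ppy, fx, fy] are passed into the modified draw_bbox: once depth has been aligned to color, a box centre pixel can be back-projected to a 3D point with the pinhole camera model. The helper below is my own sketch of that calculation, not the project's actual draw_bbox implementation:

def pixel_to_point(aligned_depth_frame, color_intrin_part, u, v):
    """Back-project color pixel (u, v) to a 3D point in metres using the aligned depth frame."""
    ppx, ppy, fx, fy = color_intrin_part
    # depth (in metres) sampled at the same pixel, valid because depth was aligned to color
    depth = aligned_depth_frame.get_distance(int(u), int(v))
    if depth <= 0:
        return None  # no valid depth reading at this pixel
    x = (u - ppx) * depth / fx  # pinhole model: X = (u - cx) * Z / fx
    y = (v - ppy) * depth / fy
    return x, y, depth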

Summary

The listings above are the complete set of notes on using the locals() function to batch-configure multiple Intel Realsense D435 cameras and run yunyang tensorflow-yolov3 detection with bounding boxes drawn on each stream.
