Installing OpenVINO and Configuring the Environment on Ubuntu 20.04


Table of Contents

  • 1. Installation and Configuration
  • 2. Testing
  • 3. Model Conversion with the OpenVINO™ Toolkit
  • 4. Inference and Deployment with the OpenVINO™ Toolkit



1. Installation and Configuration

1. Download the Intel® Distribution of OpenVINO™ Toolkit package

Option 1: download it from the official site:

Download Intel® Distribution of OpenVINO™ Toolkit

Select the version shown on the download page; this article uses l_openvino_toolkit_p_2021.4.752.

2. Extract the package (the rest of this article uses l_openvino_toolkit_p_2021.4.752 as the example):

tar -xvzf l_openvino_toolkit_p_2021.4.752.tgz

3. Enter the l_openvino_toolkit_p_2021.4.752 directory:

cd l_openvino_toolkit_p_2021.4.752

4. Run the graphical (GUI) installation wizard:

sudo ./install_GUI.sh

(If you already have OpenCV installed, you can deselect OpenCV when choosing components; installing it anyway can cause the bundled OpenCV version to conflict with your existing one.)

5. Install the external dependencies:

cd /opt/intel/openvino_2021/install_dependencies
sudo -E ./install_openvino_dependencies.sh

6. Configure the environment:

gedit ~/.bashrc

and add the following line at the end of the file:

source /opt/intel/openvino_2021/bin/setupvars.sh

7. Verify: open a new terminal; if you see "[setupvars.sh] OpenVINO environment initialized.", the environment is set up correctly.

Then install the Model Optimizer's prerequisites for ONNX:

cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites_onnx.sh

At this point, the environment is fully configured.

2. Testing

The first test runs prediction with the Caffe SqueezeNet model; it downloads some resources from the network, so you need to be online. Note: if the Caffe-related Python packages were not installed in the previous step, install them with pip.
First, enter the demo directory:

cd /opt/intel/openvino_2021/deployment_tools/demo

執(zhí)行第二腳本demo_security_barrier_camera.sh

./demo_security_barrier_camera.sh

On success, the demo displays its detection result (the original post includes a screenshot here).

3. Model Conversion with the OpenVINO™ Toolkit

With the OpenVINO™ toolkit installed, we use its Model Optimizer to convert the ONNX file into IR (Intermediate Representation) files.

First, set up the OpenVINO™ toolkit environment variables:

source /opt/intel/openvino_2021/bin/setupvars.sh

Then run the following to convert the ONNX model into IR files (.xml and .bin):

python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model runs/exp5/weights/best.onnx --model_name yolov5s_best -s 255 --reverse_input_channels --output Conv_487,Conv_471,Conv_455

The first run reported an error caused by a missing networkx module (the command formatting in the screenshot is also slightly off):

![Model Optimizer error](https://img-blog.csdnimg.cn/7e8543ddf84f4405b4c23fda4881c540.png#pic_center)

Install the missing dependency:

pip3 install networkx

OK, the conversion now succeeds!
Note: if you want to deploy on a Raspberry Pi, add this flag:

--data_type FP16
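For example, the full conversion command for an FP16 target would then be (same model and output nodes as above):

python3 /opt/intel/openvino_2021/deployment_tools/model_optimizer/mo.py --input_model runs/exp5/weights/best.onnx --model_name yolov5s_best -s 255 --reverse_input_channels --output Conv_487,Conv_471,Conv_455 --data_type FP16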


For more details on the Model Optimizer's command-line parameters, see: https://docs.openvinotoolkit.org/cn/latest/openvino_docs_MO_DG_prepare_model_convert_model_Converting_Model_General.html

After a successful conversion, you will have the yolov5s_best.xml and yolov5s_best.bin files.
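Note that the output node names (Conv_487, Conv_471, Conv_455 above) differ between YOLOv5 exports. One way to look up the names in your own ONNX file is a quick sketch with the onnx package:

```python
import onnx

model = onnx.load("runs/exp5/weights/best.onnx")
# Print the Conv node names; for a YOLOv5 export, the three detection heads
# are typically the last Conv nodes before the post-processing subgraph.
for node in model.graph.node:
    if node.op_type == "Conv":
        print(node.name)
```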

4. Inference and Deployment with the OpenVINO™ Toolkit

1. Install the Python OpenVINO™ package

Here we use Python for the inference test. Because the toolkit above was installed system-wide with the installer, the Python environment does not contain the OpenVINO™ package, so it has to be installed with pip.

Note: if you installed by building from source or similar, you can skip this step:

pip install openvino

Also, make sure the pip package version matches the installed toolkit version (2021.4 here).
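For instance, you can query both versions from Python (a quick check, assuming the 2021.x API):

```python
from openvino.inference_engine import IECore, get_version

print("OpenVINO Python package build:", get_version())
ie = IECore()
print("CPU plugin versions:", ie.get_versions("CPU"))  # should match the installed 2021.4 toolkit
```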

2. Testing the OpenVINO™ toolkit in practice

The OpenVINO™ toolkit officially provides a YOLOv3 Python inference demo; see:

https://github.com/openvinotoolkit/open_model_zoo/blob/master/demos/object_detection_demo/python/object_detection_demo.py

Here we use this already-adapted YOLOv5 version: https://github.com/violet17/yolov5_demo/blob/main/yolov5_demo.py. Its input is a camera or a video, so we can either stitch the images of the test set into a video (test.mp4) to use as input (a sketch follows), or modify the code to process still images.
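If you take the video route, a minimal sketch for stitching the test images into test.mp4 with OpenCV could look like this (the test_images/ folder name is hypothetical):

```python
import glob

import cv2

# Stitch the test-set images into a video file for the demo's video input.
images = sorted(glob.glob("test_images/*.jpg"))  # hypothetical folder of test images
first = cv2.imread(images[0])
h, w = first.shape[:2]
writer = cv2.VideoWriter("test.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 10, (w, h))
for path in images:
    frame = cv2.imread(path)
    writer.write(cv2.resize(frame, (w, h)))  # keep a uniform frame size
writer.release()
```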

The main changes in the YOLOv5 version relative to the official YOLOv3 demo are listed below (a minimal usage sketch follows the list):

  • A custom letterbox function that pre-processes (pads and resizes) the input image:

```python
import cv2
import numpy as np


def letterbox(img, size=(640, 640), color=(114, 114, 114), auto=True, scaleFill=False, scaleup=True):
    # Resize image to a 32-pixel-multiple rectangle https://github.com/ultralytics/yolov3/issues/232
    shape = img.shape[:2]  # current shape [height, width]
    w, h = size

    # Scale ratio (new / old)
    r = min(h / shape[0], w / shape[1])
    if not scaleup:  # only scale down, do not scale up (for better test mAP)
        r = min(r, 1.0)

    # Compute padding
    ratio = r, r  # width, height ratios
    new_unpad = int(round(shape[1] * r)), int(round(shape[0] * r))
    dw, dh = w - new_unpad[0], h - new_unpad[1]  # wh padding
    if auto:  # minimum rectangle
        dw, dh = np.mod(dw, 64), np.mod(dh, 64)  # wh padding
    elif scaleFill:  # stretch
        dw, dh = 0.0, 0.0
        new_unpad = (w, h)
        ratio = w / shape[1], h / shape[0]  # width, height ratios

    dw /= 2  # divide padding into 2 sides
    dh /= 2

    if shape[::-1] != new_unpad:  # resize
        img = cv2.resize(img, new_unpad, interpolation=cv2.INTER_LINEAR)
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    img = cv2.copyMakeBorder(img, top, bottom, left, right, cv2.BORDER_CONSTANT, value=color)  # add border

    # Pad once more if rounding left the image short of the target size
    top2, bottom2, left2, right2 = 0, 0, 0, 0
    if img.shape[0] != h:
        top2 = (h - img.shape[0]) // 2
        bottom2 = top2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    elif img.shape[1] != w:
        left2 = (w - img.shape[1]) // 2
        right2 = left2
        img = cv2.copyMakeBorder(img, top2, bottom2, left2, right2, cv2.BORDER_CONSTANT, value=color)  # add border
    return img
```
  • A custom parse_yolo_region function, a YOLO Region layer using the sigmoid function:

```python
def parse_yolo_region(blob, resized_image_shape, original_im_shape, params, threshold):
    # ------------------------------ Validating output parameters ------------------------------
    out_blob_n, out_blob_c, out_blob_h, out_blob_w = blob.shape
    predictions = 1.0 / (1.0 + np.exp(-blob))  # sigmoid over the raw output
    assert out_blob_w == out_blob_h, "Invalid size of output blob. It should be in NCHW layout and height should " \
                                     "be equal to width. Current height = {}, current width = {}" \
                                     "".format(out_blob_h, out_blob_w)

    # ------------------------------ Extracting layer parameters -------------------------------
    orig_im_h, orig_im_w = original_im_shape
    resized_image_h, resized_image_w = resized_image_shape
    objects = list()
    side_square = params.side * params.side

    # ------------------------------- Parsing YOLO Region output -------------------------------
    bbox_size = int(out_blob_c / params.num)  # 4 + 1 + num_classes
    for row, col, n in np.ndindex(params.side, params.side, params.num):
        bbox = predictions[0, n * bbox_size:(n + 1) * bbox_size, row, col]
        x, y, width, height, object_probability = bbox[:5]
        class_probabilities = bbox[5:]
        if object_probability < threshold:
            continue
        x = (2 * x - 0.5 + col) * (resized_image_w / out_blob_w)
        y = (2 * y - 0.5 + row) * (resized_image_h / out_blob_h)
        # Select the anchor set from the stride (8/16/32 -> 80x80/40x40/20x20 grids)
        if int(resized_image_w / out_blob_w) == 8 and int(resized_image_h / out_blob_h) == 8:  # 80x80
            idx = 0
        elif int(resized_image_w / out_blob_w) == 16 and int(resized_image_h / out_blob_h) == 16:  # 40x40
            idx = 1
        elif int(resized_image_w / out_blob_w) == 32 and int(resized_image_h / out_blob_h) == 32:  # 20x20
            idx = 2
        width = (2 * width) ** 2 * params.anchors[idx * 6 + 2 * n]
        height = (2 * height) ** 2 * params.anchors[idx * 6 + 2 * n + 1]
        class_id = np.argmax(class_probabilities)
        confidence = object_probability
        objects.append(scale_bbox(x=x, y=y, height=height, width=width, class_id=class_id, confidence=confidence,
                                  im_h=orig_im_h, im_w=orig_im_w, resized_im_h=resized_image_h,
                                  resized_im_w=resized_image_w))
    return objects
```
  • A custom scale_bbox function for bounding-box post-processing:

```python
def scale_bbox(x, y, height, width, class_id, confidence, im_h, im_w, resized_im_h=640, resized_im_w=640):
    gain = min(resized_im_w / im_w, resized_im_h / im_h)  # gain = old / new
    pad = (resized_im_w - im_w * gain) / 2, (resized_im_h - im_h * gain) / 2  # wh padding
    x = int((x - pad[0]) / gain)
    y = int((y - pad[1]) / gain)
    w = int(width / gain)
    h = int(height / gain)
    xmin = max(0, int(x - w / 2))
    ymin = max(0, int(y - h / 2))
    xmax = min(im_w, int(xmin + w))
    ymax = min(im_h, int(ymin + h))
    # item() converts NumPy scalar types to native Python types for compatibility with functions that
    # don't support NumPy types (e.g., cv2.rectangle doesn't accept int64 in its color parameter)
    return dict(xmin=xmin, xmax=xmax, ymin=ymin, ymax=ymax, class_id=class_id.item(), confidence=confidence.item())
```
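For orientation, here is a minimal sketch of how these functions plug into the 2021.x Inference Engine Python API; the IR file names follow the conversion step above, and the test image path is hypothetical:

```python
import cv2
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="yolov5/yolov5s_best.xml", weights="yolov5/yolov5s_best.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_blob = next(iter(net.input_info))
_, _, h, w = net.input_info[input_blob].input_data.shape  # NCHW input shape

frame = cv2.imread("test.jpg")                 # hypothetical test image
image = letterbox(frame, (w, h))               # pad/resize as defined above
image = image.transpose((2, 0, 1))[None, ...]  # HWC -> NCHW with batch dim

outputs = exec_net.infer(inputs={input_blob: image})
# Each output blob is then reshaped and passed to parse_yolo_region() together with its
# YoloParams (see the net.outputs / ngraph fix below); the boxes finally go through NMS.
```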

In actual testing, however, this error appears: 'openvino.inference_engine.ie_api.IENetwork' object has no attribute 'layers':

[ INFO ] Creating Inference Engine...
[ INFO ] Loading network files:
	yolov5/yolov5s_best.xml
	yolov5/yolov5s_best.bin
yolov5_demo.py:233: DeprecationWarning: Reading network using constructor is deprecated. Please, use IECore.read_network() method instead
  net = IENetwork(model=model_xml, weights=model_bin)
Traceback (most recent call last):
  File "yolov5_demo.py", line 414, in <module>
    sys.exit(main() or 0)
  File "yolov5_demo.py", line 238, in main
    not_supported_layers = [l for l in net.layers.keys() if l not in supported_layers]
AttributeError: 'openvino.inference_engine.ie_api.IENetwork' object has no attribute 'layers'

After some digging, I learned that ie_api.IENetwork.layers was officially removed in OpenVINO™ toolkit 2021.2 and later.

So the content of lines 327 and 328:

out_blob = out_blob.reshape(net.layers[layer_name].out_data[0].shape)
layer_params = YoloParams(net.layers[layer_name].params, out_blob.shape[2])

needs to be changed to:

out_blob = out_blob.reshape(net.outputs[layer_name].shape)
params = [x._get_attributes() for x in function.get_ordered_ops() if x.get_friendly_name() == layer_name][0]
layer_params = YoloParams(params, out_blob.shape[2])

    并在第322行下面新添加一行代碼:

    function = ng.function_from_cnn(net)
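Putting the fix together, the patched region of yolov5_demo.py looks roughly like this (ng is the ngraph Python module that ships with OpenVINO™ 2021.x):

```python
import ngraph as ng  # bundled with the OpenVINO 2021.x Python environment

# After the network has been read (around line 322):
function = ng.function_from_cnn(net)

# Replacing the former net.layers accesses (around lines 327-328):
out_blob = out_blob.reshape(net.outputs[layer_name].shape)
params = [x._get_attributes() for x in function.get_ordered_ops()
          if x.get_friendly_name() == layer_name][0]
layer_params = YoloParams(params, out_blob.shape[2])
```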

Finally, run the following command in the terminal:

    python yolov5_demo.py -m yolov5/yolov5s_best.xml test.mp4

Including post-processing, inference with the OpenVINO™ toolkit averages around 220 ms per frame on the test platform (an Intel® Core™ i5-7300HQ), while the PyTorch CPU version averages around 1.25 s, so the OpenVINO™ toolkit speed-up is substantial!

(The original post shows a screenshot of the final detection results here.)

If you want fast model inference on a CPU, give the OpenVINO™ toolkit a try!
