
How to configure operator optimization and quantization code for a darknet YOLOv3 model in TVM

Use the following interface functions:
- tvm.relay.optimize
- quantize.quantize
The actual code:

# convert nnvm to relay
# ('sym' is the NNVM symbol produced by the legacy nnvm darknet frontend)
print("convert nnvm symbols into relay function...")
# from nnvm.to_relay import to_relay
func, params = to_relay(sym, shape, 'float32', params=params)

# optimization
print("optimize relay graph...")
with tvm.relay.build_config(opt_level=2):
    func = tvm.relay.optimize(func, target, params)

# quantize
print("apply quantization...")
from tvm.relay import quantize
with quantize.qconfig():
    func = quantize.quantize(func, params)
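
The bare quantize.qconfig() call above uses the default settings. qconfig also accepts keyword arguments if you need to control the quantization behavior; the sketch below shows a few commonly used ones (the parameter names assume a reasonably recent tvm.relay.quantize API and should be checked against your installed TVM version):

from tvm.relay import quantize

with quantize.qconfig(
    calibrate_mode="global_scale",  # simple calibration without a calibration dataset
    global_scale=8.0,               # scale used to quantize activations
    skip_conv_layers=[0],           # keep the first conv layer in float32
    dtype_input="int8",
    dtype_weight="int8",
    dtype_activation="int32",
):
    func = quantize.quantize(func, params)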

Reference:
https://github.com/makihiro/tvm_yolov3_sample/blob/master/yolov3_quantize_sample.py

The complete script is given below. It was written against an early TVM release and can be adapted to newer TVM versions; a rough sketch of the newer-API equivalent is shown first.
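
As a rough orientation, the deprecated to_relay / relay.optimize / build_config flow above maps onto the newer relay.frontend.from_darknet + PassContext API roughly as follows. This is a minimal sketch, assuming the darknet net handle and the input data array are already loaded as in the full script below; verify the exact API names against your installed TVM version.

import tvm
from tvm import relay
from tvm.relay import quantize

# Import darknet directly into Relay (replaces the old NNVM symbol + to_relay step)
mod, params = relay.frontend.from_darknet(net, dtype="float32", shape=data.shape)

# Quantize before building; qconfig() defaults can be overridden as needed
with quantize.qconfig():
    mod = quantize.quantize(mod, params)

# PassContext replaces the old relay.build_config; the optimization passes run inside build
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)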

# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#   http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.
"""
Compile YOLO-V2 and YOLO-V3 in DarkNet Models
=============================================
**Author**: `Siju Samuel <https://siju-samuel.github.io/>`_

This article is an introductory tutorial to deploy darknet models with TVM.
All the required models and libraries will be downloaded from the internet by the script.
This script runs the YOLO-V2 and YOLO-V3 model and draws the bounding boxes.
Darknet parsing depends on the CFFI and CV2 libraries,
so please install CFFI and CV2 before executing this script.

.. code-block:: bash

  pip install cffi
  pip install opencv-python
"""

# numpy and matplotlib

import numpy as np
import matplotlib.pyplot as plt
import sys

# tvm, relay

import tvm
from tvm import te
from tvm import relay
from ctypes import *
from tvm.contrib.download import download_testdata
from tvm.relay.testing.darknet import darknetffi
import tvm.relay.testing.yolo_detection
import tvm.relay.testing.darknet

######################################################################
# Choose the model
# -----------------------
# Models are: 'yolov2', 'yolov3' or 'yolov3-tiny'

# Model name
MODEL_NAME = "yolov3"

######################################################################
# Download required files
# -----------------------
# Download cfg and weights file if first time.
CFG_NAME = MODEL_NAME + ".cfg"
WEIGHTS_NAME = MODEL_NAME + ".weights"
REPO_URL = "https://github.com/dmlc/web-data/blob/main/darknet/"
CFG_URL = REPO_URL + "cfg/" + CFG_NAME + "?raw=true"
WEIGHTS_URL = "https://pjreddie.com/media/files/" + WEIGHTS_NAME

cfg_path = download_testdata(CFG_URL, CFG_NAME, module="darknet")
weights_path = download_testdata(WEIGHTS_URL, WEIGHTS_NAME, module="darknet")

# Download and load the darknet library
if sys.platform in ["linux", "linux2"]:
    DARKNET_LIB = "libdarknet2.0.so"
    DARKNET_URL = REPO_URL + "lib/" + DARKNET_LIB + "?raw=true"
elif sys.platform == "darwin":
    DARKNET_LIB = "libdarknet_mac2.0.so"
    DARKNET_URL = REPO_URL + "lib_osx/" + DARKNET_LIB + "?raw=true"
else:
    err = "Darknet lib is not supported on {} platform".format(sys.platform)
    raise NotImplementedError(err)

lib_path = download_testdata(DARKNET_URL, DARKNET_LIB, module="darknet")

DARKNET_LIB = darknetffi.dlopen(lib_path)
net = DARKNET_LIB.load_network(cfg_path.encode("utf-8"), weights_path.encode("utf-8"), 0)
dtype = "float32"
batch_size = 1

data = np.empty([batch_size, net.c, net.h, net.w], dtype)
shape_dict = {"data": data.shape}
print("Converting darknet to relay functions...")
mod, params = relay.frontend.from_darknet(net, dtype=dtype, shape=data.shape)
######################################################################

# Compile the model on NNVM
# -------------------------
# compile the model
local = True

if local:
    target = 'llvm'
    ctx = tvm.cpu(0)
else:
    target = 'cuda'
    ctx = tvm.gpu(0)

data = np.empty([batch_size, net.c, net.h, net.w], dtype)
shape = {'data': data.shape}

dtype_dict = {}

# convert nnvm to relay
# ('sym' is the NNVM symbol produced by the legacy nnvm darknet frontend)
print("convert nnvm symbols into relay function...")
# from nnvm.to_relay import to_relay
func, params = to_relay(sym, shape, 'float32', params=params)

# optimization
print("optimize relay graph...")
with tvm.relay.build_config(opt_level=2):
    func = tvm.relay.optimize(func, target, params)

# quantize
print("apply quantization...")
from tvm.relay import quantize
with quantize.qconfig():
    func = quantize.quantize(func, params)

# Relay build
print("Compiling the model...")
print(func.astext(show_meta_data=False))
with tvm.relay.build_config(opt_level=3):
    graph, lib, params = tvm.relay.build(func, target=target, params=params)

# Save the model
from tvm.contrib import util  # newer TVM releases rename this to tvm.contrib.utils
tmp = util.tempdir()
lib_fname = tmp.relpath('model.tar')
lib.export_library(lib_fname)
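# For deployment, the exported archive can be reloaded in a separate process.
# Hedged sketch (it assumes the newer unified relay.build output, i.e. a single
# factory module inside the .tar; the older (graph, lib, params) triple above
# would also need the graph JSON and the params saved separately):
#
#   import tvm
#   from tvm.contrib import graph_executor
#
#   loaded = tvm.runtime.load_module(lib_fname)
#   dev = tvm.cpu(0)
#   module = graph_executor.GraphModule(loaded["default"](dev))
#   module.set_input("data", tvm.nd.array(data.astype("float32")))
#   module.run()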

# NNVM equivalent (legacy, kept for reference only):
# with nnvm.compiler.build_config(opt_level=2):
#     graph, lib, params = nnvm.compiler.build(sym, target, shape, dtype_dict, params)

# [neth, netw] = shape['data'][2:]  # Current image shape is 608x608
######################################################################

# Import the graph to Relay
# -------------------------
# compile the model
target = tvm.target.Target("llvm", host="llvm")
dev = tvm.cpu(0)
data = np.empty([batch_size, net.c, net.h, net.w], dtype)
shape = {"data": data.shape}
print("Compiling the model...")
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

[neth, netw] = shape["data"][2:]  # Current image shape is 608x608
######################################################################

# Load a test image
# -----------------
test_image = "dog.jpg"
print("Loading the test image...")
img_url = REPO_URL + "data/" + test_image + "?raw=true"
img_path = download_testdata(img_url, test_image, "data")

data = tvm.relay.testing.darknet.load_image(img_path, netw, neth)
######################################################################

# Execute on TVM Runtime
# ----------------------
# The process is no different from other examples.
from tvm.contrib import graph_executor

m = graph_executor.GraphModule(lib["default"](dev))

# set inputs
m.set_input("data", tvm.nd.array(data.astype(dtype)))

# execute
print("Running the test image...")

# detection
# thresholds
thresh = 0.5
nms_thresh = 0.45

m.run()

# get outputs
tvm_out = []
if MODEL_NAME == "yolov2":
    layer_out = {}
    layer_out["type"] = "Region"
    # Get the region layer attributes (n, out_c, out_h, out_w, classes, coords, background)
    layer_attr = m.get_output(2).numpy()
    layer_out["biases"] = m.get_output(1).numpy()
    out_shape = (layer_attr[0], layer_attr[1] // layer_attr[0], layer_attr[2], layer_attr[3])
    layer_out["output"] = m.get_output(0).numpy().reshape(out_shape)
    layer_out["classes"] = layer_attr[4]
    layer_out["coords"] = layer_attr[5]
    layer_out["background"] = layer_attr[6]
    tvm_out.append(layer_out)

elif MODEL_NAME == "yolov3":
    for i in range(3):
        layer_out = {}
        layer_out["type"] = "Yolo"
        # Get the yolo layer attributes (n, out_c, out_h, out_w, classes, total)
        layer_attr = m.get_output(i * 4 + 3).numpy()
        layer_out["biases"] = m.get_output(i * 4 + 2).numpy()
        layer_out["mask"] = m.get_output(i * 4 + 1).numpy()
        out_shape = (layer_attr[0], layer_attr[1] // layer_attr[0], layer_attr[2], layer_attr[3])
        layer_out["output"] = m.get_output(i * 4).numpy().reshape(out_shape)
        layer_out["classes"] = layer_attr[4]
        tvm_out.append(layer_out)

elif MODEL_NAME == "yolov3-tiny":
    for i in range(2):
        layer_out = {}
        layer_out["type"] = "Yolo"
        # Get the yolo layer attributes (n, out_c, out_h, out_w, classes, total)
        layer_attr = m.get_output(i * 4 + 3).numpy()
        layer_out["biases"] = m.get_output(i * 4 + 2).numpy()
        layer_out["mask"] = m.get_output(i * 4 + 1).numpy()
        out_shape = (layer_attr[0], layer_attr[1] // layer_attr[0], layer_attr[2], layer_attr[3])
        layer_out["output"] = m.get_output(i * 4).numpy().reshape(out_shape)
        layer_out["classes"] = layer_attr[4]
        tvm_out.append(layer_out)
    thresh = 0.560

# do the detection and bring up the bounding boxes
img = tvm.relay.testing.darknet.load_image_color(img_path)
_, im_h, im_w = img.shape
dets = tvm.relay.testing.yolo_detection.fill_network_boxes(
    (netw, neth), (im_w, im_h), thresh, 1, tvm_out
)
last_layer = net.layers[net.n - 1]
tvm.relay.testing.yolo_detection.do_nms_sort(dets, last_layer.classes, nms_thresh)

coco_name = "coco.names"
coco_url = REPO_URL + "data/" + coco_name + "?raw=true"
font_name = "arial.ttf"
font_url = REPO_URL + "data/" + font_name + "?raw=true"
coco_path = download_testdata(coco_url, coco_name, module="data")
font_path = download_testdata(font_url, font_name, module="data")

with open(coco_path) as f:
    content = f.readlines()

names = [x.strip() for x in content]

tvm.relay.testing.yolo_detection.show_detections(img, dets, thresh, names, last_layer.classes)
tvm.relay.testing.yolo_detection.draw_detections(
    font_path, img, dets, thresh, names, last_layer.classes
)
plt.imshow(img.transpose(1, 2, 0))
plt.show()

References:
https://github.com/makihiro/tvm_yolov3_sample/blob/master/yolov3_quantize_sample.py
https://tvm.apache.org/docs/tutorials/frontend/from_darknet.html#sphx-glr-tutorials-frontend-from-darknet-py
