
Install onnx-tensorrt




This is a small tool that converts an ONNX model into a TensorRT engine.
Its usage is similar to TensorRT's built-in tooling for converting an ONNX model into an engine.
You need to download the ONNX source code and place it under the project's third_party directory.
You also need to download TensorRT 8.

The original project code is here.

I have also uploaded the entire project and data to a Baidu Netdisk share:

Link: https://pan.baidu.com/s/1pHs5Qdeqmz4ppGDHPSTnOQ?pwd=7djr
Extraction code: 7djr

cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/home/oem/Downloads/onnx-tensorrt-master/TensorRT-8.2.1.8 && make -j8

Copy the ONNX file into the build folder and run:

onnx2trt my_model.onnx -t my_model.onnx.txt

TensorRT Backend For ONNX

Parses ONNX models for execution with TensorRT.

See also the TensorRT documentation.

For the list of recent changes, see the changelog.

For a list of commonly seen issues and questions, see the FAQ.

For business inquiries, please contact researchinquiries@nvidia.com

For press and other inquiries, please contact Hector Marinez at hmarinez@nvidia.com

Supported TensorRT Versions

Development on the Master branch is for the latest version of TensorRT 8.2.3.0 with full-dimensions and dynamic shape support.

For previous versions of TensorRT, refer to their respective branches.

Full Dimensions + Dynamic Shapes

Building INetwork objects in full dimensions mode with dynamic shape support requires calling the following API:

C++

const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
builder->createNetworkV2(explicitBatch);

Python

import tensorrt
explicit_batch = 1 << (int)(tensorrt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
builder.create_network(explicit_batch)

For examples of usage of these APIs see:

  • sampleONNXMNIST
  • sampleDynamicReshape
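As a complement to those samples, the sketch below shows how the explicit-batch flag is typically combined with the ONNX parser in the TensorRT 8 Python API. This is a minimal sketch, not taken from the project above: the file names my_model.onnx and my_engine.trt are placeholders and error handling is kept to a minimum.

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Create a network in explicit-batch (full-dimensions) mode.
explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network(explicit_batch) as network, \
     trt.OnnxParser(network, TRT_LOGGER) as parser:
    # Parse the ONNX model into the TensorRT network definition.
    with open("my_model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise RuntimeError("failed to parse the ONNX model")

    # Build and serialize an engine with the TensorRT 8.x builder API.
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB of builder workspace
    serialized_engine = builder.build_serialized_network(network, config)
    with open("my_engine.trt", "wb") as f:
        f.write(serialized_engine)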

Supported Operators

Currently supported ONNX operators are listed in the operator support matrix.

Installation

Dependencies

  • Protobuf >= 3.0.x
  • TensorRT 8.2.3.0
  • TensorRT 8.2.3.0 open source libraries (master branch)

Building

For building within Docker, we recommend using and setting up the Docker containers as instructed in the main TensorRT repository (https://github.com/NVIDIA/TensorRT#setting-up-the-build-environment) to build the onnx-tensorrt library.

Once you have cloned the repository, you can build the parser libraries and executables by running:

cd onnx-tensorrt
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=/home/oem/Downloads/onnx-tensorrt-master/TensorRT-8.2.1.8 && make -j8

Ensure that you update your LD_LIBRARY_PATH to pick up the location of the newly built library:

export LD_LIBRARY_PATH=$PWD:$LD_LIBRARY_PATH

Note that this project has a dependency on CUDA. By default the build will look in /usr/local/cuda for the CUDA toolkit installation. If your CUDA path is different, override the default path by providing -DCUDA_TOOLKIT_ROOT_DIR=<path_to_cuda_install> in the CMake command.

For building only the libraries, append -DBUILD_LIBRARY_ONLY=1 to the CMake build command.

Experimental Ops

All experimental operators will be considered unsupported by the ONNX-TRT’s supportsModel() function.

NonMaxSuppression is available as an experimental operator in TensorRT 8. It has the limitation that the output is always padded to shape [max_output_boxes_per_class, 3], so some post-processing is required to extract the valid indices.
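As an illustration only (not something stated by the project above), one way to filter the padded output in NumPy is shown below. Each row of the output is [batch_index, class_index, box_index] per the ONNX NonMaxSuppression specification; the assumption that padded rows are filled with -1 is hypothetical and should be checked against your engine's actual output.

import numpy as np

# Example output with shape [max_output_boxes_per_class, 3]; each row is
# [batch_index, class_index, box_index].  Padded rows are assumed
# (hypothetically) to be filled with -1.
selected = np.array([[0, 0, 3],
                     [0, 1, 7],
                     [-1, -1, -1],
                     [-1, -1, -1]], dtype=np.int32)

valid = selected[selected[:, 2] >= 0]  # keep only rows with a real box index
print(valid)  # -> [[0 0 3], [0 1 7]]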

Executable Usage

ONNX models can be converted to serialized TensorRT engines using the onnx2trt executable:

onnx2trt my_model.onnx -o my_engine.trt
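The serialized engine written by this command can later be loaded with the TensorRT Python runtime. A minimal sketch (assuming the TensorRT 8.x Python bindings are installed and that my_engine.trt is the file produced above; inference wiring is omitted):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine produced by onnx2trt.
with open("my_engine.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Inspect the input/output bindings before setting up inference buffers.
for i in range(engine.num_bindings):
    print(engine.get_binding_name(i), engine.get_binding_shape(i))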

ONNX models can also be converted to human-readable text:

onnx2trt my_model.onnx -t my_model.onnx.txt

ONNX models can also be optimized by ONNX’s optimization libraries (added by dsandler).
To optimize an ONNX model and output a new one, use -m to specify the output model name and -O to specify a semicolon-separated list of optimization passes to apply:

onnx2trt my_model.onnx -O "pass_1;pass_2;pass_3" -m my_model_optimized.onnx

See all available optimization passes by running:

onnx2trt -p

See more usage information by running:

onnx2trt -h

Python Modules

Python bindings for the ONNX-TensorRT parser are packaged in the shipped .whl files. Install them with:

python3 -m pip install <tensorrt_install_dir>/python/tensorrt-8.x.x.x-cp<python_ver>-none-linux_x86_64.whl
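After installing the wheel, a quick sanity check of the bindings can look like the following (a minimal sketch; the printed version depends on the wheel you installed):

import tensorrt
print(tensorrt.__version__)                 # should print the installed 8.x version
assert tensorrt.Builder(tensorrt.Logger())  # confirms the bindings can reach the native library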

TensorRT 8.2.1.8 supports ONNX release 1.8.0. Install it with:

python3 -m pip install onnx==1.8.0
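Once onnx is installed, you can sanity-check a model before handing it to the converter. A minimal sketch (the model path is a placeholder):

import onnx

model = onnx.load("/path/to/model.onnx")
onnx.checker.check_model(model)                  # raises ValidationError if the model is malformed
print(onnx.helper.printable_graph(model.graph))  # human-readable summary of the graph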

The ONNX-TensorRT backend can be installed by running:

python3 setup.py install

ONNX-TensorRT Python Backend Usage

The TensorRT backend for ONNX can be used in Python as follows:

import onnx
import onnx_tensorrt.backend as backend
import numpy as np

model = onnx.load("/path/to/model.onnx")
engine = backend.prepare(model, device='CUDA:1')
input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
output_data = engine.run(input_data)[0]
print(output_data)
print(output_data.shape)

C++ Library Usage

The model parser library, libnvonnxparser.so, has its C++ API declared in this header:

NvOnnxParser.h

Tests

After installation (or inside the Docker container), ONNX backend tests can be run as follows:

Real model tests only:

python onnx_backend_test.py OnnxBackendRealModelTest

All tests:

python onnx_backend_test.py

You can use the -v flag to make the output more verbose.

Pre-trained Models

Pre-trained models in ONNX format can be found at the ONNX Model Zoo.
