TensorFlow Intel Platform Optimization
TensorFlow* is one of the most widely used machine learning frameworks in deep learning, and it demands efficient use of computational resources. To take full advantage of Intel architecture and improve performance, the TensorFlow* library has been optimized with Intel MKL-DNN primitives, a popular performance library for deep learning applications.
Optimized platforms
There are three ways to install it.
1. Using pip

pip install -i https://pypi.anaconda.org/intel/simple tensorflow
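After installing, one way to confirm that the wheel was actually built with MKL support is the check below. This applies to TF 1.x-era builds; the attribute has moved between releases, so treat it as a sketch rather than a guaranteed API:

```shell
# Print True if the installed TensorFlow was built with MKL-DNN enabled (TF 1.x).
python -c "import tensorflow as tf; print(tf.pywrap_tensorflow.IsMklEnabled())"
```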
2. Install via Anaconda
3. Build from source
The first two methods may not support the latest instruction sets.
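Since the build commands further down pass instruction-set `-m` flags explicitly, it helps to first check which SIMD extensions the local CPU actually supports. A minimal sketch (the helper name `best_copt` is made up for illustration):

```shell
# Map a /proc/cpuinfo "flags" line to the most advanced matching
# instruction-set --copt used by the Bazel build commands below.
best_copt() {
  case "$1" in
    *avx512f*) echo "-mavx512f" ;;
    *avx2*)    echo "-mavx2" ;;
    *avx*)     echo "-mavx" ;;
    *sse4_2*)  echo "-msse4.2" ;;
    *)         echo "" ;;
  esac
}

# On Linux, inspect the local CPU:
if [ -r /proc/cpuinfo ]; then
  best_copt "$(grep -m1 '^flags' /proc/cpuinfo)"
fi
```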
First install dnf and Bazel.

Install Bazel:
pushd /var/tmp
URL=https://github.com/bazelbuild/bazel/releases/latest
LASTURL=$(curl $URL -s -L -I -o /dev/null -w '%{url_effective}')
BZ_VERSION=${LASTURL##*/}
wget https://github.com/bazelbuild/bazel/releases/download/$BZ_VERSION/bazel-$BZ_VERSION-installer-linux-x86_64.sh
chmod +x bazel-*
./bazel-*
export PATH=/usr/local/bin:$PATH
popd
CentOS 7.4 cannot install `dnf` from EPEL.
WARNING: EPEL 7 DNF is very old and has issues, including security flaws. This appears to be the reason it was removed. That said, here is a workaround to get it working on CentOS 7.
cat > /etc/yum.repos.d/dnf-stack-el7.repo << EOF
[dnf-stack-el7]
name=Copr repo for dnf-stack-el7 owned by @rpm-software-management
baseurl=https://copr-be.cloud.fedoraproject.org/results/@rpm-software-management/dnf-stack-el7/epel-7-\$basearch/
skip_if_unavailable=True
gpgcheck=1
gpgkey=https://copr-be.cloud.fedoraproject.org/results/@rpm-software-management/dnf-stack-el7/pubkey.gpg
enabled=1
enabled_metadata=1
EOF
yum install dnf
CentOS 7 will hit this bug:
dnf copr plugin not present in dnf-plugins-core
Because EPEL 7 DNF has been removed, installing dnf on CentOS 7 additionally requires:
wget http://springdale.math.ias.edu/data/puias/unsupported/7/x86_64/dnf-plugins-core-0.1.5-3.sdl7.noarch.rpm
dnf install copr-cli
sudo dnf update
dnf copr enable vbatts/bazel
On CentOS, Bazel can then be installed directly:
wget https://copr.fedorainfracloud.org/coprs/vbatts/bazel/repo/epel-7/vbatts-bazel-epel-7.repo -P /etc/yum.repos.d/
yum install dnf-plugins-core-0.1.5-3.sdl7.noarch.rpm
yum install bazel
Install TensorFlow:
git clone https://github.com/tensorflow/tensorflow tensorflow
cd tensorflow
Compiling TensorFlow with the Intel C Compiler (the second command below is a gcc build with the AVX-512 options enabled):
CC=icc bazel build --verbose_failures --config=mkl --copt=-msse4.2 --copt="-DEIGEN_USE_VML" -c opt //tensorflow/tools/pip_package:build_pip_package
bazel build --config=mkl -c opt --copt=-mavx --copt=-mavx2 --copt=-mfma --copt=-mavx512f --copt=-mavx512dq --copt=-mavx512cd --copt=-mavx512bw --copt=-mavx512vl --copt="-DEIGEN_USE_VML" //tensorflow/tools/pip_package:build_pip_package
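Once either Bazel build completes, the standard follow-up is to package the build results into a pip wheel and install it (the /tmp output directory here is an arbitrary choice, not from the original post):

```shell
# Package the build results into a pip wheel, then install it.
bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl
```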
Build and Install TensorFlow* on Intel® Architecture
Build a TensorFlow container:
More details on GitHub; see build-dev-container.sh in the TensorFlow Docker tooling.
# source tf-docker.env
# cat tf-docker.env
# The script reads the following environment variables for the tf docker build:
export TF_DOCKER_BUILD_TYPE=mkl
# export TF_DOCKER_BUILD_TYPE=CPU  # CPU or GPU image

export TF_DOCKER_BUILD_IS_DEVEL=YES  # Is this a developer image

export TF_DOCKER_BUILD_DEVEL_BRANCH=r1.99
# export TF_DOCKER_BUILD_DEVEL_BRANCH=master
#   (Required if TF_DOCKER_BUILD_IS_DEVEL is YES)
#   Specifies the branch to check out for devel docker images

# export TF_DOCKER_BUILD_CENTRAL_PIP
#   (Optional) If set to a non-empty string, will use it as the URL from which
#   the pip wheel file will be downloaded (instead of building the pip locally).

# export TF_DOCKER_BUILD_CENTRAL_PIP_IS_LOCAL
#   (Optional) If set to a non-empty string, we will treat
#   TF_DOCKER_BUILD_CENTRAL_PIP as a path rather than a URL.

export TF_DOCKER_BUILD_IMAGE_NAME=native-mkl-tf
#   (Optional) If set to any non-empty value, will use it as the name of the
#   newly-built image. If not set, the tag prefix tensorflow/tensorflow will
#   be used.

# export TF_DOCKER_BUILD_VERSION
#   (Optional) If set to any non-empty value, will use the version (e.g., 0.8.0)
#   as the tag prefix of the image. Additional strings, e.g., "-devel-gpu",
#   will be appended to the tag. If not set, the default tag prefix "latest"
#   will be used.

# export TF_DOCKER_BUILD_PORT
#   (Optional) If set to any non-empty and valid port number, will use that
#   port number during basic checks on the newly-built docker image.

# export TF_DOCKER_BUILD_PUSH_CMD
#   (Optional) If set to a valid binary/script path, will call the script with
#   the final tagged image name as an argument, to push the image to a central
#   repo such as gcr.io or Docker Hub.

# export TF_DOCKER_BUILD_PUSH_WITH_CREDENTIALS
#   (Optional) Do not set this along with TF_DOCKER_BUILD_PUSH_CMD. We will
#   push with the direct commands as opposed to a script.

# export TF_DOCKER_USERNAME   # (Optional) Docker Hub username for pushing a package.
# export TF_DOCKER_EMAIL      # (Optional) Docker Hub email for pushing a package.
# export TF_DOCKER_PASSWORD   # (Optional) Docker Hub password for pushing a package.

# export TF_DOCKER_BUILD_PYTHON_VERSION
#   (Optional) Specifies the desired Python version. Defaults to PYTHON2.

# export TF_DOCKER_BUILD_OPTIONS
#   (Optional) Specifies the desired build options. Defaults to OPT.
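With the variables above saved to an env file, a devel image can then be built by sourcing the file and running the parameterized build script from the TensorFlow repo (the env-file name and in-tree script path below assume an r1.x-era checkout):

```shell
# Load the build configuration, then run the docker build script in-tree.
source tf-docker.env
tensorflow/tools/docker/parameterized_docker_build.sh
```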
References:
MPI tutorial
TensorFlow MPI
Build TensorFlow
Build
Install (Chinese version)
pip install mock
REF:
Training courses
more info
conda install for TensorFlow and Intel Distribution for Python upgrade from 2017 to 2018
DNF (Dandified Yum)
Intel® Computer Vision (CV) SDK
Intel's Deep Learning Inference Engine Developer Guide
inference-engine-devguide-introduction
Configuring Model Optimizer for TensorFlow* Prerequisites
Configuring Caffe*
Converting Your TensorFlow* Model
What is Intel® DAAL?
Application-related papers
Pedestrian Detection Using TensorFlow* on Intel® Architecture
Traffic light detection with TensorFlow
CIFAR-10 classification with TensorFlow
Build and Install TensorFlow* Serving on Intel® Architecture
Train and Use a TensorFlow* Model on Intel® Architecture
Using the Model Optimizer to Convert TensorFlow* Models
Video: Performance Optimization of Deep Learning Frameworks Caffe* and TensorFlow* for the Intel® Xeon Phi™ Product Family
Complete single-node tutorial
Reposted from: https://www.cnblogs.com/shaohef/p/8968283.html