
Notes on the train/test, solver, prototxt and deploy files in Caffe deployment (Part 2)


1. train_val.prototxt

name: "CIFAR10_quick" layer {name: "cifar"type: "Data"top: "data"top: "label"include {phase: TRAIN}transform_param {# mirror: true# mean_file: "examples/cifar10/mean.binaryproto"uumean_file: "myself/00b/00bmean.binaryproto" }data_param {# source: "examples/cifar10/cifar10_train_lmdb"source: "myself/00b/00b_train_lmdb"batch_size: 50backend: LMDB} } layer {name: "cifar"type: "Data"top: "data"top: "label"include {phase: TEST}transform_param {# mean_file: "examples/cifar10/mean.binaryproto"mean_file: "myself/00b/00bmean.binaryproto"}data_param {# source: "examples/cifar10/cifar10_test_lmdb"source: "myself/00b/00b_val_lmdb"batch_size: 50backend: LMDB} } layer {name: "conv1"type: "Convolution"bottom: "data"top: "conv1"param {lr_mult: 1}param {lr_mult: 2}convolution_param {num_output: 32# pad: 1kernel_size: 4stride: 1weight_filler {type: "gaussian"std: 0.0001}bias_filler {type: "constant"}} } layer {name: "pool1"type: "Pooling"bottom: "conv1"top: "pool1"pooling_param {pool: MAXkernel_size: 2stride: 2} } layer {name: "relu1"type: "ReLU"bottom: "pool1"top: "pool1" } layer {name: "conv2"type: "Convolution"bottom: "pool1"top: "conv2"param {lr_mult: 1}param {lr_mult: 2}convolution_param {num_output: 32# pad: 2kernel_size: 4stride: 1weight_filler {type: "gaussian"std: 0.01}bias_filler {type: "constant"}} } layer {name: "relu2"type: "ReLU"bottom: "conv2"top: "conv2" } layer {name: "pool2"type: "Pooling"bottom: "conv2"top: "pool2"pooling_param {pool: AVEkernel_size: 2stride: 2} } layer {name: "conv3"type: "Convolution"bottom: "pool2"top: "conv3"param {lr_mult: 1}param {lr_mult: 2}convolution_param {num_output: 32# pad: 2kernel_size: 4stride: 1weight_filler {type: "gaussian"std: 0.01}bias_filler {type: "constant"}} } layer {name: "relu3"type: "ReLU"bottom: "conv3"top: "conv3" } layer {name: "pool3"type: "Pooling"bottom: "conv3"top: "pool3"pooling_param {pool: AVEkernel_size: 2stride: 2} } layer {name: "conv4"type: "Convolution"bottom: "pool3"top: "conv4"param {lr_mult: 1}param {lr_mult: 2}convolution_param {num_output: 32# pad: 2kernel_size: 4stride: 1weight_filler {type: "gaussian"std: 0.01}bias_filler {type: "constant"}} } layer {name: "relu4"type: "ReLU"bottom: "conv4"top: "conv4" } layer {name: "pool4"type: "Pooling"bottom: "conv4"top: "pool4"pooling_param {pool: AVEkernel_size: 2stride: 2} } layer {name: "ip1"type: "InnerProduct"bottom: "pool4"top: "ip1"param {lr_mult: 1}param {lr_mult: 2}inner_product_param {num_output: 200weight_filler {type: "gaussian"std: 0.1}bias_filler {type: "constant"}} } layer {name: "ip2"type: "InnerProduct"bottom: "ip1"top: "ip2"param {lr_mult: 1}param {lr_mult: 2}inner_product_param {num_output: 3weight_filler {type: "gaussian"std: 0.1}bias_filler {type: "constant"}} } layer {name: "accuracy"type: "Accuracy"bottom: "ip2"bottom: "label"top: "accuracy"include {phase: TEST} } layer {name: "loss"type: "SoftmaxWithLoss"bottom: "ip2"bottom: "label"top: "loss" }

2. solver.prototxt

# reduce the learning rate after 8 epochs (4000 iters) by a factor of 10
# The train/test net protocol buffer definition
net: "myself/00b/train_val.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 10
# Carry out testing every 500 training iterations.
test_interval: 70
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.001
momentum: 0.9
weight_decay: 0.004
# The learning rate policy
lr_policy: "fixed"
# lr_policy: "step"
gamma: 0.1
stepsize: 100
# Display every 100 iterations
display: 10
# The maximum number of iterations
max_iter: 2000
# snapshot intermediate results
# snapshot: 3000
# snapshot_format: HDF5
snapshot_prefix: "myself/00b/00b"
# solver mode: CPU or GPU
solver_mode: CPU

3. deploy.prototxt

name: "CIFAR10_quick" layer {name: "data"type: "Input"top: "data"input_param { shape: { dim: 1 dim: 3 dim: 101 dim: 101 } } } layer {name: "conv1"type: "Convolution"bottom: "data"top: "conv1"convolution_param {num_output: 32kernel_size: 4stride: 1} } layer {name: "relu1"type: "ReLU"bottom: "conv1"top: "conv1" } layer {name: "pool1"type: "Pooling"bottom: "conv1"top: "pool1"pooling_param {pool: MAXkernel_size: 2stride: 2} } layer {name: "conv2"type: "Convolution"bottom: "pool1"top: "conv2"convolution_param {num_output: 32kernel_size: 4stride: 1} } layer {name: "relu2"type: "ReLU"bottom: "conv2"top: "conv2" } layer {name: "pool2"type: "Pooling"bottom: "conv2"top: "pool2"pooling_param {pool: MAXkernel_size: 2stride: 2} } layer {name: "conv3"type: "Convolution"bottom: "pool2"top: "conv3"convolution_param {num_output: 32kernel_size: 4stride: 1} } layer {name: "relu3"type: "ReLU"bottom: "conv3"top: "conv3" } layer {name: "pool3"type: "Pooling"bottom: "conv3"top: "pool3"pooling_param {pool: MAXkernel_size: 2stride: 2} } layer {name: "conv4"type: "Convolution"bottom: "pool3"top: "conv4"convolution_param {num_output: 32kernel_size: 4stride: 1} } layer {name: "relu4"type: "ReLU"bottom: "conv4"top: "conv4" } layer {name: "pool4"type: "Pooling"bottom: "conv4"top: "pool4"pooling_param {pool: MAXkernel_size: 2stride: 2} } layer {name: "ip1"type: "InnerProduct"bottom: "pool4"top: "ip1"inner_product_param {num_output: 200} } layer {name: "ip2"type: "InnerProduct"bottom: "ip1"top: "ip2"inner_product_param {num_output: 3} } layer {#name: "loss"name: "prob"type: "Softmax" bottom: "ip2"top: "prob"#top: "loss" }

Reference 1:

We use the caffenet model that ships with Caffe, located under models/bvlc_reference_caffenet/. Copy the two configuration files we need into the myfile folder:

# sudo cp models/bvlc_reference_caffenet/solver.prototxt examples/myfile/
# sudo cp models/bvlc_reference_caffenet/train_val.prototxt examples/myfile/

Modify train_val.prototxt. Only the data layers of the two phases need to change; everything else can be left alone.

name: "CaffeNet" layer {name: "data" type: "Data" top: "data" top: "label" include { phase: TRAIN } transform_param { mirror: true crop_size: 227 mean_file: "examples/myfile/mean.binaryproto" } data_param { source: "examples/myfile/img_train_lmdb" batch_size: 256 backend: LMDB } } layer { name: "data" type: "Data" top: "data" top: "label" include { phase: TEST } transform_param { mirror: false crop_size: 227 mean_file: "examples/myfile/mean.binaryproto" } data_param { source: "examples/myfile/img_test_lmdb" batch_size: 50 backend: LMDB } } ?

In practice, only mean_file and source in the two data layers change; everything else stays the same.

Then modify solver.prototxt:

# sudo vi examples/myfile/solver.prototxt
net: "examples/myfile/train_val.prototxt"
test_iter: 2
test_interval: 50
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 100
display: 20
max_iter: 500
momentum: 0.9
weight_decay: 0.005
solver_mode: GPU

There are 100 test images and the test batch_size is 50, so setting test_iter to 2 covers them all. During training, the learning rate is lowered step by step.
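The rule of thumb used here (test batch_size times test_iter should cover the whole validation set) can be written out explicitly. A small sketch with the numbers from this example; nothing below is Caffe-specific:

import math

num_val_images = 100    # size of the validation set in this example
test_batch_size = 50    # batch_size of the TEST data layer

# One test pass should see every validation image at least once.
test_iter = math.ceil(num_val_images / test_batch_size)
print(test_iter)        # 2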

Reference 2:

With the lmdb and the mean file prepared earlier, we now take GoogLeNet as an example of how to modify the network and train a model.


Copy the bvlc_googlenet folder from caffe-master\models to caffe-master\examples\imagenet. (Our lmdb and mean file are already there, so it is convenient to keep everything together.)

Open train_val.prototxt and modify it:

1. Modify the data layers:


layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    mirror: true
    crop_size: 224
    mean_file: "examples/imagenet/mydata_mean.binaryproto"  # mean file
    # mean_value: 104  # comment these out
    # mean_value: 117
    # mean_value: 123
  }
  data_param {
    source: "examples/imagenet/mydata_train_lmdb"  # training-set lmdb
    batch_size: 32  # adjust to fit your GPU
    backend: LMDB
  }
}

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param {
    mirror: false
    crop_size: 224
    mean_file: "examples/imagenet/mydata_mean.binaryproto"  # mean file
    # mean_value: 104
    # mean_value: 117
    # mean_value: 123
  }
  data_param {
    source: "examples/imagenet/mydata_val_lmdb"  # validation-set lmdb
    batch_size: 50  # batch_size times test_iter (in the solver) should roughly equal the validation set size
    backend: LMDB
  }
}

2. Modify the output layers:

    由于Googlenet有三個輸出,所以改三個地方,其他網絡一般只有一個輸出,則改一個地方即可。

If you are fine-tuning, the names of the output layers must also be changed. (Parameters are initialized by matching layer names; since the output size has changed, the stored parameters no longer fit these layers, so they have to be renamed. The pycaffe sketch after the listing below shows how to check which layers keep their pretrained weights.)

    layer {name: "loss1/classifier"type: "InnerProduct"bottom: "loss1/fc"top: "loss1/classifier"param {lr_mult: 1decay_mult: 1}param {lr_mult: 2decay_mult: 0}inner_product_param {num_output: 1000 #改成你的數據集類別數weight_filler {type: "xavier"}bias_filler {type: "constant"value: 0}} } layer {name: "loss2/classifier"type: "InnerProduct"bottom: "loss2/fc"top: "loss2/classifier"param {lr_mult: 1decay_mult: 1}param {lr_mult: 2decay_mult: 0}inner_product_param {num_output: 1000 #改成你的數據集類別數weight_filler {type: "xavier"}bias_filler {type: "constant"value: 0}} } layer {name: "loss3/classifier"type: "InnerProduct"bottom: "pool5/7x7_s1"top: "loss3/classifier"param {lr_mult: 1decay_mult: 1}param {lr_mult: 2decay_mult: 0}inner_product_param {num_output: 1000 #改成你的數據集類別數weight_filler {type: "xavier"}bias_filler {type: "constant"value: 0}} }

3. Open deploy.prototxt and modify:

    layer {name: "loss3/classifier"type: "InnerProduct"bottom: "pool5/7x7_s1"top: "loss3/classifier"param {lr_mult: 1decay_mult: 1}param {lr_mult: 2decay_mult: 0}inner_product_param {num_output: 1000 #改成你的數據集類別數weight_filler {type: "xavier"}bias_filler {type: "constant"value: 0}} }

If fine-tuning, rename this layer exactly as you did in train_val.prototxt.

Next, open the solver and modify:

    net: "examples/imagenet/bvlc_googlenet/train_val.prototxt" #路徑不要錯 test_iter: 1000 #前面已說明該值 test_interval: 4000 #迭代多少次測試一次 test_initialization: false display: 40 average_loss: 40 base_lr: 0.01 lr_policy: "step" stepsize: 320000 #迭代多少次改變一次學習率 gamma: 0.96 max_iter: 10000000 #迭代次數 momentum: 0.9 weight_decay: 0.0002 snapshot: 40000 snapshot_prefix: "examples/imagenet/bvlc_googlenet" #生成的caffemodel保存在imagenet下,形如bvlc_googlenet_iter_***.caffemodel solver_mode: GPU

Now go back to caffe-master\examples\imagenet, open train_caffenet.sh, and modify:

(For fine-tuning, add -weights **/**/**.caffemodel to the script, i.e. the path of the caffemodel used to initialize the fine-tuning run.)

#!/usr/bin/env sh
./build/tools/caffe train \
  -solver examples/imagenet/bvlc_googlenet/solver.prototxt -gpu 0

(If you have multiple GPUs, pick whichever you like.) Then, from the caffe-master directory, run the script to start training: ./examples/imagenet/train_caffenet.sh
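The same run can also be driven from pycaffe rather than the shell script, which is handy if you want to inspect blobs between iterations. A rough sketch under the same path assumptions; the copy_from call plays the role of the -weights option used for fine-tuning.

import caffe

caffe.set_device(0)
caffe.set_mode_gpu()

solver = caffe.SGDSolver('examples/imagenet/bvlc_googlenet/solver.prototxt')

# Only needed for fine-tuning: start from the pretrained weights.
solver.net.copy_from('models/bvlc_googlenet/bvlc_googlenet.caffemodel')

solver.solve()            # run until max_iter
# ...or call solver.step(100) repeatedly to inspect solver.net.blobs in between.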

The trained caffemodel can then be used for image classification. For that you need four files: (1) the labels.txt produced earlier, (2) mydata_mean.binaryproto, (3) the trained caffemodel, and (4) the modified deploy.prototxt. For the detailed procedure see: http://blog.csdn.net/sinat_30071459/article/details/50974695
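The linked post covers the classification step in detail; as a rough pycaffe sketch of how the four files fit together (all file names below follow the examples above and are assumptions, including the snapshot name and test.jpg):

import caffe

caffe.set_mode_cpu()

net = caffe.Net('examples/imagenet/bvlc_googlenet/deploy.prototxt',
                'examples/imagenet/bvlc_googlenet_iter_40000.caffemodel',
                caffe.TEST)

# Turn the binaryproto mean file into a per-channel mean.
blob = caffe.proto.caffe_pb2.BlobProto()
blob.ParseFromString(open('examples/imagenet/mydata_mean.binaryproto', 'rb').read())
mean = caffe.io.blobproto_to_array(blob)[0].mean(1).mean(1)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))      # HWC -> CHW
transformer.set_mean('data', mean)
transformer.set_raw_scale('data', 255)            # caffe.io loads images in [0, 1]
transformer.set_channel_swap('data', (2, 1, 0))   # RGB -> BGR

labels = open('examples/imagenet/labels.txt').read().splitlines()
image = caffe.io.load_image('test.jpg')
net.blobs['data'].data[...] = transformer.preprocess('data', image)
prob = net.forward()['prob'][0]
print(labels[int(prob.argmax())], float(prob.max()))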

Reference 3:

Points to note when writing the *_train_test.prototxt, *_deploy.prototxt and *_solver.prototxt files

1. The *_train_test.prototxt file

This is the network configuration file used for training and testing.

(1) In a data layer, the include parameter is written as

include {
  phase: TRAIN    # or TEST
}

TRAIN and TEST are enum values and must not be wrapped in quotation marks, otherwise an error is reported. Fortunately the error message points at the offending line: a reported "8", for example, means the problem is on line 8 of the configuration file.

(2) The convolution layer (Convolution) and the fully connected layer (InnerProduct, often rendered as "inner product layer") are similar in two respects.

[1] Both have two param blocks:

param { lr_mult: 1  decay_mult: 1 }
param { lr_mult: 2  decay_mult: 0 }

[2] The fields inside convolution_param{} and inner_product_param{} are similar, and some are identical.

Something came up today, to be continued tomorrow!

Continuing!

(3) The mean file *_mean.binaryproto goes inside transform_param{}, while the training and test data sources go inside data_param{}.

2. The *_deploy.prototxt file

[1] The *_deploy.prototxt file is built slightly differently from the *_train_test.prototxt file. First of all, it has no TEST-phase parts of the train/test network; only the network itself remains.

[2] The data layer is written differently and is more concise:

    input: "data" input_dim: 1 input_dim: 3 input_dim: 32 input_dim: 32

Note the input: "data" line: that is the name of the data blob. Without it, the first convolution layer cannot find its input data; I got an error the first time because I left it out. The four values below it give the blob shape, roughly (batch_size, channels, height, width) = (1, 3, 32, 32).
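Those four values fix the shape of the 'data' blob. At deployment time the batch dimension does not have to stay at 1; the blob can be reshaped before the forward pass. A small pycaffe sketch (the file names are placeholders):

import caffe

caffe.set_mode_cpu()
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

print(net.blobs['data'].data.shape)    # (1, 3, 32, 32), as declared above

# Process 10 images per forward pass instead of 1.
net.blobs['data'].reshape(10, 3, 32, 32)
net.reshape()                          # propagate the new shape through the net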

[3] In the convolution and fully connected layers, the weight_filler{} and bias_filler{} parameters no longer need to be written, because their values are supplied by the trained *.caffemodel file.

[4] Changes to the output: (1) there is no TEST-phase accuracy module any more; (2) the output layer itself changes.

In the *_train_test.prototxt file:

layer {
  name: "loss"
  type: "SoftmaxWithLoss"  # note the difference from the deploy file below
  bottom: "ip2"
  bottom: "label"          # the label bottom is gone in the deploy file: deployment predicts the label, so none can be supplied
  top: "loss"
}

In the *_deploy.prototxt file:

layer {
  name: "prob"
  type: "Softmax"
  bottom: "ip2"
  top: "prob"
}

*** Note that the type of the output layer differs between the two files: SoftmaxWithLoss in one, Softmax in the other. Also, to make training and deployment output easy to tell apart, the top is named loss during training and prob at deployment.

3. The *_solver.prototxt file

    net: "test.prototxt" #訓練網絡的配置文件 test_iter: 100 #test_iter 指明在測試階段有多上個前向過程(也就是有多少圖片)被執行。 在MNIST例子里,在網絡配置文件里已經設置test網絡的batch size=100,這里test_iter 設置為100,那在測試階段共有100*100=10000 圖片被處理 test_interval: 500 #每500次訓練迭代后,執行一次test base_lr: 0.01 #學習率初始化為0.01 momentum:0.9 #u=0.9 weight_decay:0.0005 # lr_policy: "inv" gamma: 0.0001 power: 0.75 #以上三個參數都和降低學習率有關,詳細的學習策略和計算公式見下面 // The learning rate decay policy. The currently implemented learning rate ?

    // policies are as follows: ?

    //??? - fixed: always return base_lr. ?

    //??? - step: return base_lr * gamma ^ (floor(iter / step)) ?

    //??? - exp: return base_lr * gamma ^ iter

    //??? - inv: return base_lr * (1 + gamma * iter) ^ (- power) ?

    //??? - multistep: similar to step but it allows non uniform steps defined by ?

    //????? stepvalue ?

    //??? - poly: the effective learning rate follows a polynomial decay, to be ?

    //????? zero by the max_iter. return base_lr (1 - iter/max_iter) ^ (power) ?

    //??? - sigmoid: the effective learning rate follows a sigmod decay ?

    //????? return base_lr ( 1/(1 + exp(-gamma * (iter - stepsize)))) ?

    // where base_lr, max_iter, gamma, step, stepvalue and power are defined ?

    // in the solver parameter protocol buffer, and iter is the current iteration. display:100 #每100次迭代,顯示結果 snapshot: 5000 #每5000次迭代,保存一次快照 snapshot_prefix: "path_prefix" #快照保存前綴:更準確的說是快照保存路徑+前綴,應為文件名后的名字是固定的 solver_mode:GPU #選擇解算器是用cpu還是gpu

Writing the batch file (on Windows):

F:/caffe/caffe-windows-master/bin/caffe.exe train --solver=C:/Users/Administrator/Desktop/caffe_test/cifar-10/cifar10_slover_prototxt --gpu=all
pause

Reference 4:


Converting train_val.prototxt into deploy.prototxt


1. Delete the input data layers (i.e. the layers with type: "Data" ... include { phase: TRAIN }) and add a description of the input dimensions in their place.


input: "data"
input_dim: 1
input_dim: 3
input_dim: 224
input_dim: 224
force_backward: true


2. Remove the final "loss" and "accuracy" layers and add a "prob" layer:


layers {
  name: "prob"
  type: SOFTMAX
  bottom: "fc8"
  top: "prob"
}

If the train_val file also contains other preprocessing layers, it is slightly more involved. In that case, insert a layer between the 'data' layer and the 'conv1' layer (the one with bottom: "data" / top: "conv1") to handle the input-data mean:


layer {
  name: "mean"
  type: "Convolution"
  bottom: "data"
  top: "data"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  ...
}

In the deploy.prototxt file, the "mean" layer must be kept; only its top changes (from "data" to "mean"), and 'conv1' has to be changed accordingly (bottom: "mean" / top: "conv1"):

layer {
  name: "mean"
  type: "Convolution"
  bottom: "data"
  top: "mean"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  ...
}