
[caffe-matlab] Training Caffe from MATLAB and plotting the loss


Preface

This post walks through training a Caffe model from MATLAB step by step, much like the train command of caffe.exe.

Per the usual convention, reference links:

http://caffe.berkeleyvision.org/tutorial/interfaces.html

http://www.cnblogs.com/denny402/p/5110204.html

A gripe first: MATLAB tutorials for Caffe are truly scarce; the heavyweights have all gone off to play with Python... o(╯□╰)o. On with the post.

[Note] For rigorous terminology please consult the official Caffe site and other expert blogs; I write in plain language here and don't split hairs over wording.

1. Loading the model

First, a glance at the Caffe homepage turns up one piece of information:

solver = caffe.Solver('./models/bvlc_reference_caffenet/solver.prototxt');

What does this line do? It loads a model.

Let's try it with the MNIST solver everyone has. I used an absolute path; a relative path works just as well.

[Note] My solver may differ from the stock one; an earlier post explains what was changed and why. Download links:

lenet_solver1.prototxt: link: http://pan.baidu.com/s/1qXWQrhy password: we0e

lenet_train_test1.prototxt: link: http://pan.baidu.com/s/1miawrxQ password: ghxt

Mean file: link: http://pan.baidu.com/s/1miFDNHe password: 48az

The mnist_data used below: link: http://pan.baidu.com/s/1bp62Enl password: royk

A quick search suggested two likely causes for MATLAB becoming unresponsive (which is exactly what happened on my first attempt to load this solver): either a DLL failed to link, the same way many people see caffe.set_mode_gpu() hang outright, or there is an error inside the prototxt. I won't admit this problem cost me a whole afternoon.

The first cause can be ruled out: Caffe has behaved itself so far, so a DLL problem is unlikely. That leaves the prototxt paths. Let's see what the prototxt contains:

net: "examples/mnist/lenet_train_test1.prototxt" snapshot_prefix: "examples/mnist/lenet"與路徑有關(guān)的兩句話,我們的matlab程序文件夾是E:\CaffeDev\caffe-master\matlab\demo,與這個路徑相差十萬八千里。保險起見,我的解決方法是把mnist訓(xùn)練需要的東西全都復(fù)制丟到matlab程序文件夾了。如下:


The mnist_data folder holds the LMDB files of the MNIST dataset along with lenet.prototxt. If you don't feel like building them yourself, download them above; if you do, an earlier post shows how.
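As a quick sanity check before editing anything, you can confirm from MATLAB that everything actually landed in the working folder. A minimal sketch; the file names simply mirror the layout described above:

% Check that the copied files are visible from the MATLAB working folder.
% The names below assume the layout described in this post.
files = {'lenet_solver1.prototxt', 'lenet_train_test1.prototxt', ...
         'mean.binaryproto', 'mnist_data/mnist_train_lmdb', 'mnist_data/mnist_test_lmdb'};
for k = 1:numel(files)
    if ~exist(files{k}, 'file') && ~exist(files{k}, 'dir')
        fprintf('missing: %s\n', files{k});
    end
end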

With the files moved over, the paths inside the prototxt files have to change too:

lenet_solver1.prototxt

# The train/test net protocol buffer definition
net: "lenet_train_test1.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# In the case of MNIST, we have test batch size 100 and 100 test iterations,
# covering the full 10,000 testing images.
test_iter: 100
# Carry out testing every 500 training iterations.
test_interval: 500
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 1
# The maximum number of iterations
max_iter: 10000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "mnist_data/lenet"
# solver mode: CPU or GPU
solver_mode: CPU

lenet_train_test1.prototxt (the modified portion):
name: "LeNet" layer {name: "mnist"type: "Data"top: "data"top: "label"include {phase: TRAIN}transform_param {mean_file: "mean.binaryproto"scale: 0.00390625}data_param {source: "mnist_data/mnist_train_lmdb"batch_size: 64backend: LMDB} } layer {name: "mnist"type: "Data"top: "data"top: "label"include {phase: TEST}transform_param {mean_file: "mean.binaryproto"scale: 0.00390625}data_param {source: "mnist_data/mnist_test_lmdb"batch_size: 100backend: LMDB} }

Before going any further, it's best to verify with a .bat file that this prototxt can be read and trained, so that errors at this stage are ruled out first.
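For reference, a .bat along these lines should do; this is a sketch, and the caffe.exe path must be adjusted to your own build:

E:\CaffeDev\caffe-master\Build\x64\Release\caffe.exe train --solver=lenet_solver1.prototxt
pause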

Then load the model again:

addpath('..')
caffe.reset_all
solver = caffe.Solver('lenet_solver1.prototxt');

Display it:

>> solver

solver =

  Solver with properties:

          net: [1x1 caffe.Net]
    test_nets: [1x1 caffe.Net]
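While we're here, the loaded nets can be inspected directly. A small sketch using the matcaffe accessors; treat it as illustrative:

% Peek inside the training net that the solver loaded.
solver.net.layer_names              % names of the layers in the train net
solver.net.blob_names               % names of the blobs
solver.net.blobs('data').shape      % dimensions of the input blob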

2. Training the model

2.1 Training in one shot

solver.solve();

There's a trap here: after running this, MATLAB behaves as if it were on standby, with no output whatsoever. I thought something had gone wrong, so I examined the two nets the solver had loaded:

The model's input blobs turned out to be empty. Had our LMDB data failed to load? I tried LevelDB, and every tweak to the LevelDB path I could think of, such as prefixing './', with no luck. Then another possibility occurred to me: loading a model reads no data at all; data is only read at run time. And since solver.solve() trains the model in one shot without producing any output, MATLAB might already be busy training. To test the idea: hit run, go eat, come back, observe. Sure enough, trained models showed up under this path:

snapshot_prefix: "mnist_data/lenet"

To rule out the possibility that these models were trained on empty data, I checked them with the MNIST classification_demo, and the handwritten digits were all classified correctly. That confirms the hypothesis: solver.solve() trains in one shot, emits no output along the way, and MATLAB simply looks frozen.
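If you would rather verify without leaving MATLAB, something along these lines should also work. A sketch only: it assumes mnist_data/lenet.prototxt is the deploy-style net and that the iteration-10000 snapshot exists, and it feeds random numbers rather than a real preprocessed digit:

% Load the snapshot into a test-phase net and run one forward pass.
net = caffe.Net('mnist_data/lenet.prototxt', 'mnist_data/lenet_iter_10000.caffemodel', 'test');
shape = net.blobs('data').shape;      % input dimensions expected by the net
im = rand(shape, 'single');           % stand-in for real preprocessed digits
prob = net.forward({im});             % prob{1} holds the class scores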

2.2 Training step-by-step

Back to the official site again:



The point is that with the step command we can train for a chosen number of iterations, go do something else, then train some more.

Then I got stuck. Setting step to 1 means we want to pull out the loss and accuracy at every single iteration, but then what? How do we keep training? Every tutorial I found was in Python. Taking a cue from those, and rereading the Caffe site over and over, I found:

net.blobs('data').set_data(data);
net.forward_prefilled();
prob = net.blobs('prob').get_data();

A rough translation of the explanation given there:

net.forward takes an N-D input and returns the data of the output blobs.

net.forward_prefilled instead works on the data already present in the network's blobs, taking no input and producing no output.
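Putting the two side by side as code; my_data here is a hypothetical array already sized and preprocessed to match the net's data blob, so this is a sketch rather than a drop-in snippet:

% Variant A: fill the input blob yourself, run forward, read the output blob.
my_data = rand(net.blobs('data').shape, 'single');  % hypothetical input
net.blobs('data').set_data(my_data);
net.forward_prefilled();                 % no arguments in, nothing returned
prob = net.blobs('prob').get_data();     % fetch the result from the blob

% Variant B: the same forward pass in one call.
res = net.forward({my_data});            % cell array in, cell array of outputs back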

With those two descriptions in mind, consider: if we use step to train one iteration and then pause, is our data still sitting in the blobs?

[First idea] If so, net.forward_prefilled could take care of continuing the training. Let's try:

% train
clear
clc
addpath('..')
caffe.reset_all
solver = caffe.Solver('lenet_solver1.prototxt');
loss = [];
accuracy = [];
for i = 1:10000
    disp('.')
    solver.step(1);
    iter = solver.iter();
    solver.net.forward_prefilled
end

Update, 2016-10-21

[After experimenting] The code above does train, but then it struck me: why is there no backward_prefilled? What's more, removing solver.net.forward_prefilled trains just the same. Evidently solver.step already includes the forward and backward passes, so the training code actually used is:

% train
clear
clc
addpath('..')
caffe.reset_all
solver = caffe.Solver('lenet_solver1.prototxt');
loss = [];
accuracy = [];
for i = 1:10000
    disp('.')
    solver.step(1);
    iter = solver.iter();
end

Next comes extracting the loss and accuracy at every iteration. No need to think twice: use the blobs. To speed training up I switched to the GPU build of caffe-windows. The code:

% train
clear
clc
close all
format long   % set display precision; Caffe's loss seems to carry several digits past the decimal point
addpath('..')
caffe.reset_all   % reset Caffe, otherwise loading a second network gets stuck
solver = caffe.Solver('lenet_solver1.prototxt');   % load the network
loss = [];       % record adjacent loss values
accuracy = [];   % record adjacent accuracy values
hold on          % for plotting
accuracy_init = 0;
loss_init = 0;
for i = 1:10000
    solver.step(1);   % step one iteration, then grab loss and accuracy
    iter = solver.iter();
    loss = solver.net.blobs('loss').get_data();                % loss on the training set
    accuracy = solver.test_nets.blobs('accuracy').get_data();  % accuracy on the validation set
    % plot the loss segment
    x = [i-1, i];
    y = [loss_init loss];
    plot(x, y, 'r-')
    drawnow
    loss_init = loss;
end

With this we get a real-time curve: every iteration appends one loss value to the line plot.
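One caveat about the accuracy read in that loop: the test net only actually runs every test_interval iterations (500 in our solver), so reading its accuracy blob more often just returns the last computed value. A sketch that samples it only when it is fresh, meant to replace the accuracy line inside the loop above:

% Sample validation accuracy only right after a test pass has run.
test_interval = 500;   % must match the value in lenet_solver1.prototxt
if mod(i, test_interval) == 0
    acc = solver.test_nets(1).blobs('accuracy').get_data();   % test_nets is an array of nets
    fprintf('iter %d: accuracy = %.4f\n', i, acc);
end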


To guard against a botched training run, test the model:

E:\CaffeDev-GPU\caffe-master\Build\x64\Release\caffe.exe test --model=lenet_train_test1.prototxt -weights=mnist_data/lenet_iter_10000.caffemodel -gpu=0
pause

The result:

I1021 21:00:07.988450  8132 net.cpp:261] This network produces output accuracy
I1021 21:00:07.989449  8132 net.cpp:261] This network produces output loss
I1021 21:00:07.989449  8132 net.cpp:274] Network initialization done.
I1021 21:00:07.992449  8132 caffe.cpp:253] Running for 50 iterations.
I1021 21:00:07.999449  8132 caffe.cpp:276] Batch 0, accuracy = 0.96
I1021 21:00:07.999449  8132 caffe.cpp:276] Batch 0, loss = 0.168208
I1021 21:00:08.002449  8132 caffe.cpp:276] Batch 1, accuracy = 0.95
I1021 21:00:08.002449  8132 caffe.cpp:276] Batch 1, loss = 0.152652
I1021 21:00:08.005450  8132 caffe.cpp:276] Batch 2, accuracy = 0.88
I1021 21:00:08.005450  8132 caffe.cpp:276] Batch 2, loss = 0.320218
I1021 21:00:08.007450  8132 caffe.cpp:276] Batch 3, accuracy = 0.92
I1021 21:00:08.008450  8132 caffe.cpp:276] Batch 3, loss = 0.320782
I1021 21:00:08.010450  8132 caffe.cpp:276] Batch 4, accuracy = 0.88
I1021 21:00:08.011451  8132 caffe.cpp:276] Batch 4, loss = 0.354194
I1021 21:00:08.013450  8132 caffe.cpp:276] Batch 5, accuracy = 0.91
I1021 21:00:08.013450  8132 caffe.cpp:276] Batch 5, loss = 0.604682
I1021 21:00:08.015450  8132 caffe.cpp:276] Batch 6, accuracy = 0.88
I1021 21:00:08.015450  8132 caffe.cpp:276] Batch 6, loss = 0.310961
I1021 21:00:08.017451  8132 caffe.cpp:276] Batch 7, accuracy = 0.95
I1021 21:00:08.017451  8132 caffe.cpp:276] Batch 7, loss = 0.18691
I1021 21:00:08.019450  8132 caffe.cpp:276] Batch 8, accuracy = 0.93
I1021 21:00:08.019450  8132 caffe.cpp:276] Batch 8, loss = 0.302631
I1021 21:00:08.022451  8132 caffe.cpp:276] Batch 9, accuracy = 0.96
I1021 21:00:08.022451  8132 caffe.cpp:276] Batch 9, loss = 0.10867
I1021 21:00:08.024451  8132 caffe.cpp:276] Batch 10, accuracy = 0.94
I1021 21:00:08.024451  8132 caffe.cpp:276] Batch 10, loss = 0.283927
I1021 21:00:08.026451  8132 caffe.cpp:276] Batch 11, accuracy = 0.91
I1021 21:00:08.027451  8132 caffe.cpp:276] Batch 11, loss = 0.389279
I1021 21:00:08.029451  8132 caffe.cpp:276] Batch 12, accuracy = 0.87
I1021 21:00:08.029451  8132 caffe.cpp:276] Batch 12, loss = 0.618325
I1021 21:00:08.031451  8132 caffe.cpp:276] Batch 13, accuracy = 0.91
I1021 21:00:08.031451  8132 caffe.cpp:276] Batch 13, loss = 0.464931
I1021 21:00:08.033452  8132 caffe.cpp:276] Batch 14, accuracy = 0.91
I1021 21:00:08.033452  8132 caffe.cpp:276] Batch 14, loss = 0.348089
I1021 21:00:08.035451  8132 caffe.cpp:276] Batch 15, accuracy = 0.88
I1021 21:00:08.035451  8132 caffe.cpp:276] Batch 15, loss = 0.45388
I1021 21:00:08.037451  8132 caffe.cpp:276] Batch 16, accuracy = 0.93
I1021 21:00:08.038452  8132 caffe.cpp:276] Batch 16, loss = 0.277403
I1021 21:00:08.040452  8132 caffe.cpp:276] Batch 17, accuracy = 0.9
I1021 21:00:08.040452  8132 caffe.cpp:276] Batch 17, loss = 0.48363
I1021 21:00:08.042453  8132 caffe.cpp:276] Batch 18, accuracy = 0.91
I1021 21:00:08.042453  8132 caffe.cpp:276] Batch 18, loss = 0.519036
I1021 21:00:08.044452  8132 caffe.cpp:276] Batch 19, accuracy = 0.88
I1021 21:00:08.045452  8132 caffe.cpp:276] Batch 19, loss = 0.364235
I1021 21:00:08.047452  8132 caffe.cpp:276] Batch 20, accuracy = 0.9
I1021 21:00:08.047452  8132 caffe.cpp:276] Batch 20, loss = 0.414757
I1021 21:00:08.049453  8132 caffe.cpp:276] Batch 21, accuracy = 0.9
I1021 21:00:08.049453  8132 caffe.cpp:276] Batch 21, loss = 0.387713
I1021 21:00:08.051452  8132 caffe.cpp:276] Batch 22, accuracy = 0.93
I1021 21:00:08.051452  8132 caffe.cpp:276] Batch 22, loss = 0.308721
I1021 21:00:08.053452  8132 caffe.cpp:276] Batch 23, accuracy = 0.93
I1021 21:00:08.053452  8132 caffe.cpp:276] Batch 23, loss = 0.328804
I1021 21:00:08.055454  8132 caffe.cpp:276] Batch 24, accuracy = 0.92
I1021 21:00:08.055454  8132 caffe.cpp:276] Batch 24, loss = 0.385196
I1021 21:00:08.058454  8132 caffe.cpp:276] Batch 25, accuracy = 0.93
I1021 21:00:08.058454  8132 caffe.cpp:276] Batch 25, loss = 0.255955
I1021 21:00:08.061453  8132 caffe.cpp:276] Batch 26, accuracy = 0.92
I1021 21:00:08.061453  8132 caffe.cpp:276] Batch 26, loss = 0.49177
I1021 21:00:08.063453  8132 caffe.cpp:276] Batch 27, accuracy = 0.89
I1021 21:00:08.064453  8132 caffe.cpp:276] Batch 27, loss = 0.366904
I1021 21:00:08.066453  8132 caffe.cpp:276] Batch 28, accuracy = 0.93
I1021 21:00:08.066453  8132 caffe.cpp:276] Batch 28, loss = 0.309272
I1021 21:00:08.068454  8132 caffe.cpp:276] Batch 29, accuracy = 0.88
I1021 21:00:08.068454  8132 caffe.cpp:276] Batch 29, loss = 0.520516
I1021 21:00:08.070453  8132 caffe.cpp:276] Batch 30, accuracy = 0.92
I1021 21:00:08.070453  8132 caffe.cpp:276] Batch 30, loss = 0.358098
I1021 21:00:08.072453  8132 caffe.cpp:276] Batch 31, accuracy = 0.94
I1021 21:00:08.072453  8132 caffe.cpp:276] Batch 31, loss = 0.157759
I1021 21:00:08.074455  8132 caffe.cpp:276] Batch 32, accuracy = 0.91
I1021 21:00:08.075454  8132 caffe.cpp:276] Batch 32, loss = 0.336977
I1021 21:00:08.077455  8132 caffe.cpp:276] Batch 33, accuracy = 0.95
I1021 21:00:08.077455  8132 caffe.cpp:276] Batch 33, loss = 0.116172
I1021 21:00:08.079454  8132 caffe.cpp:276] Batch 34, accuracy = 0.93
I1021 21:00:08.079454  8132 caffe.cpp:276] Batch 34, loss = 0.136695
I1021 21:00:08.081454  8132 caffe.cpp:276] Batch 35, accuracy = 0.89
I1021 21:00:08.082454  8132 caffe.cpp:276] Batch 35, loss = 0.648639
I1021 21:00:08.084455  8132 caffe.cpp:276] Batch 36, accuracy = 0.91
I1021 21:00:08.084455  8132 caffe.cpp:276] Batch 36, loss = 0.256923
I1021 21:00:08.086454  8132 caffe.cpp:276] Batch 37, accuracy = 0.93
I1021 21:00:08.086454  8132 caffe.cpp:276] Batch 37, loss = 0.321325
I1021 21:00:08.088454  8132 caffe.cpp:276] Batch 38, accuracy = 0.92
I1021 21:00:08.088454  8132 caffe.cpp:276] Batch 38, loss = 0.28317
I1021 21:00:08.090456  8132 caffe.cpp:276] Batch 39, accuracy = 0.9
I1021 21:00:08.090456  8132 caffe.cpp:276] Batch 39, loss = 0.352922
I1021 21:00:08.093456  8132 caffe.cpp:276] Batch 40, accuracy = 0.93
I1021 21:00:08.093456  8132 caffe.cpp:276] Batch 40, loss = 0.298536
I1021 21:00:08.095455  8132 caffe.cpp:276] Batch 41, accuracy = 0.88
I1021 21:00:08.095455  8132 caffe.cpp:276] Batch 41, loss = 0.817203
I1021 21:00:08.097455  8132 caffe.cpp:276] Batch 42, accuracy = 0.89
I1021 21:00:08.097455  8132 caffe.cpp:276] Batch 42, loss = 0.324021
I1021 21:00:08.100455  8132 caffe.cpp:276] Batch 43, accuracy = 0.92
I1021 21:00:08.100455  8132 caffe.cpp:276] Batch 43, loss = 0.270256
I1021 21:00:08.102455  8132 caffe.cpp:276] Batch 44, accuracy = 0.89
I1021 21:00:08.102455  8132 caffe.cpp:276] Batch 44, loss = 0.443635
I1021 21:00:08.104455  8132 caffe.cpp:276] Batch 45, accuracy = 0.92
I1021 21:00:08.104455  8132 caffe.cpp:276] Batch 45, loss = 0.316793
I1021 21:00:08.106456  8132 caffe.cpp:276] Batch 46, accuracy = 0.9
I1021 21:00:08.106456  8132 caffe.cpp:276] Batch 46, loss = 0.353561
I1021 21:00:08.109457  8132 caffe.cpp:276] Batch 47, accuracy = 0.94
I1021 21:00:08.109457  8132 caffe.cpp:276] Batch 47, loss = 0.304726
I1021 21:00:08.111456  8132 caffe.cpp:276] Batch 48, accuracy = 0.88
I1021 21:00:08.111456  8132 caffe.cpp:276] Batch 48, loss = 0.643014
I1021 21:00:08.113456  8132 caffe.cpp:276] Batch 49, accuracy = 0.93
I1021 21:00:08.113456  8132 caffe.cpp:276] Batch 49, loss = 0.214009
I1021 21:00:08.113456  8132 caffe.cpp:281] Loss: 0.355134
I1021 21:00:08.113456  8132 caffe.cpp:293] accuracy = 0.9134
I1021 21:00:08.113456  8132 caffe.cpp:293] loss = 0.355134 (* 1 = 0.355134 loss)


Appendix: reading the loss from the log file

If you train with the caffe train command in a DOS window instead, the loss and accuracy have to be extracted from Caffe's default log file. Finding it is simple:


Sort by date, locate the most recent file whose name begins with caffe.exe, and open it with Notepad++ to see the log entries:
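Locating the newest log can also be scripted. A sketch under two assumptions: that the Windows build writes its logs to the temp folder (glog's usual fallback) and that the file name follows the pattern shown above:

% Find the most recent Caffe INFO log (the log directory is an assumption).
d = dir(fullfile(tempdir, 'caffe.exe.*.log.INFO.*'));
if ~isempty(d)
    [~, idx] = max([d.datenum]);
    latest_log = fullfile(tempdir, d(idx).name)
end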


There are many ways to read the file; I use a regular expression to match the loss entries:

First copy the log file out to the working folder, then:

% my loss
clear; clc; close all;

train_log_file = 'caffe.exe.BINGO-PC.Bingo.log.INFO.20160924-193528.13464';
train_interval = 100;
test_interval = 500;

[~, string_output] = dos(['type ', train_log_file]);
pat = '1 = .*? loss';
o1 = regexp(string_output, pat, 'start');   % 'start' returns the start index of each match
o2 = regexp(string_output, pat, 'end');     % 'end' returns the end index of each match
o3 = regexp(string_output, pat, 'match');   % 'match' returns the matched substrings
loss = zeros(1, size(o1, 2));
for i = 1:size(o1, 2)
    loss(i) = str2num(string_output(o1(i)+4:o2(i)-5));
end
plot(loss)
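The same approach pulls out the accuracy entries. A sketch with a token-based pattern; this variant is mine, not from the original script:

% Extract the test accuracies from the same log text via a capture token.
acc_tokens = regexp(string_output, 'accuracy = ([\d\.]+)', 'tokens');
accuracy = cellfun(@(t) str2double(t{1}), acc_tokens);
figure; plot(accuracy); xlabel('test pass #'); ylabel('accuracy');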

