Caffe: Deep Learning in Practice
Because of a work handover, I need to describe clearly how Caffe is used and how it is structured. Since several colleagues have asked me about the same topics, I decided to write a short tutorial here for easy reference.
This post briefly covers the following:
- What can Caffe do?
- Why choose Caffe?
- Environment
- Overall structure
- Protocol buffers
- Basic training workflow
- Training in Python
- Debugging
What can Caffe do?
- Define network architectures
- Train networks
- Core implemented in C++/CUDA
- Command-line, Python, and MATLAB interfaces (a minimal Python sketch follows this list)
- CPU and GPU execution modes
- Ships with reference models and pretrained weights
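As a quick illustration of the Python interface and the CPU/GPU switch, here is a minimal sketch of loading a reference model and running one forward pass. The file names deploy.prototxt and bvlc_reference_caffenet.caffemodel are placeholders for whatever model definition and pretrained weights you actually have.

```python
import numpy as np
import caffe

caffe.set_mode_cpu()   # or caffe.set_mode_gpu() if a GPU build is available

# Placeholder paths: point them at your own model definition and weights.
net = caffe.Net('deploy.prototxt',                     # network architecture
                'bvlc_reference_caffenet.caffemodel',  # pretrained weights
                caffe.TEST)                            # inference phase

# Feed a random image-shaped batch just to exercise the forward pass.
net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
out = net.forward()
print({name: blob.shape for name, blob in out.items()})
```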
Why choose Caffe?
- Good modular design
- Simple: changing the network structure requires no code changes
- Open source: the code is maintained collaboratively by the community
Environment:
- $ lsb_release -a
  Distributor ID: Ubuntu
  Description:    Ubuntu 12.04.4 LTS
  Release:        12.04
  Codename:       precise
- $ cat /proc/version
  Linux version 3.2.0-29-generic (buildd@allspice) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #46-Ubuntu SMP Fri Jul 27 17:03:23 UTC 2012
- Editor: Vim + Taglist + Cscope
Overall structure:
Let CAFFE denote the Caffe root directory. The core code lives under $CAFFE/src/caffe and consists mainly of four parts: net, blob, layer, and solver.
- net.cpp:
  A Net defines the whole network, which contains many layers. net.cpp is responsible for running the network's forward and backward passes during training, i.e. computing each layer's output and gradients during forward/backward, as sketched below.
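To make the forward/backward flow concrete, here is a small sketch using the Python interface. It assumes a training-mode network whose prototxt defines blobs named 'data', 'label', and 'loss'; those names and the file name train_val.prototxt are assumptions, not something this post specifies.

```python
import caffe

# Placeholder path for your training network definition.
net = caffe.Net('train_val.prototxt', caffe.TRAIN)

out = net.forward()                  # run all layers front to back
print('loss =', out.get('loss'))     # 'loss' is an assumed output blob name

net.backward()                       # propagate gradients back through all layers
# Each parameter blob now holds its gradient in .diff
for name, params in net.params.items():
    print(name, 'weight grad magnitude:', abs(params[0].diff).sum())
```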
- layers:
  The layers under $CAFFE/src/caffe/layers. When a layer is used in the protobuf definition (the .proto file defines the message types; .prototxt or .binaryproto files give the message values), it carries a name, a type (data/conv/pool/...), its connection structure (input blobs and output blobs), and layer-specific parameters (e.g. the kernel size of a conv layer). Defining a layer means implementing its setup, forward, and backward passes; the sketch below illustrates that contract.
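The post describes the C++ layer route, but the same setup/forward/backward contract is easiest to see in a toy Python layer (Caffe's "Python" layer type, used via caffe.Layer). The layer below is purely hypothetical and only illustrates the interface.

```python
import caffe

class ScaleByTwoLayer(caffe.Layer):
    """Toy layer (hypothetical) that multiplies its input by 2,
    illustrating the setup / reshape / forward / backward contract."""

    def setup(self, bottom, top):
        # Check wiring and parse layer-specific parameters here.
        if len(bottom) != 1 or len(top) != 1:
            raise Exception('ScaleByTwoLayer expects one bottom and one top blob')

    def reshape(self, bottom, top):
        # Output has the same shape as the input.
        top[0].reshape(*bottom[0].data.shape)

    def forward(self, bottom, top):
        top[0].data[...] = 2.0 * bottom[0].data

    def backward(self, top, propagate_down, bottom):
        if propagate_down[0]:
            bottom[0].diff[...] = 2.0 * top[0].diff
```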
- blob.cpp:
  Data and gradients flow through the net as 4-dimensional blobs. A layer owns several blobs, e.g.:
  - for data, the blob size is Number * Channels * Height * Width, such as 256*3*224*224;
  - for a conv layer, the weight blob size is Number of output nodes * Number of input nodes * Height * Width, e.g. the first conv layer of AlexNet has a 96 x 3 x 11 x 11 weight blob;
  - for an inner-product (fully connected) layer, the weight blob size is 1 * 1 * Number of output nodes * Number of input nodes, and the bias blob size is 1 * 1 * 1 * Number of output nodes. (Like conv layers, inner-product layers have both weights and a bias, which is why a layer definition contains two blobs_lr entries: the first for the weights, the second for the bias. Likewise there are two weight_decay entries, one for the weights and one for the bias.)
  Inside a blob, mutable_cpu/gpu_data() and cpu/gpu_data() manage the data memory, while mutable_cpu/gpu_diff() and cpu/gpu_diff() hold the gradients (the diff). A short inspection sketch follows below.
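Assuming you have an AlexNet/CaffeNet-style model on disk (the file names below are placeholders), the blob shapes described above can be inspected from Python:

```python
import caffe

# Placeholder paths: substitute your own prototxt and caffemodel.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Activation blobs: Number x Channels x Height x Width
for name, blob in net.blobs.items():
    print('blob  ', name, blob.data.shape)

# Parameter blobs: params[layer][0] is the weight blob, params[layer][1] the bias blob
for name, params in net.params.items():
    print('params', name, [p.data.shape for p in params])
```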
- solver.cpp:
  Combines the loss with the gradients to update the weights. Its main functions are:
  Init(),
  Solve(),
  ComputeUpdateValue(),
  Snapshot(), Restore(),  // snapshot and restore the network state
  Test();
  solver.cpp provides three solvers, i.e. three classes to choose from: SGDSolver, AdaGradSolver, and NesterovSolver.
  Regarding the loss: a network may have several losses at once, and regularization (L1/L2) can be added. A sketch of the update that ComputeUpdateValue performs in the SGD case follows below.
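As a rough sketch (not the actual C++ code), the plain SGD solver's update boils down to a momentum step along these lines: the weight-decay term is folded into the gradient first, and a "history" buffer plays the role of the momentum/velocity term.

```python
import numpy as np

def sgd_update(weight, grad, history, lr=0.01, momentum=0.9, weight_decay=0.0005):
    """Momentum SGD step in the spirit of Caffe's SGDSolver (simplified sketch)."""
    grad = grad + weight_decay * weight             # L2 regularization contribution
    history[...] = momentum * history + lr * grad   # accumulated update ("history")
    weight[...] = weight - history                  # apply the update
    return weight, history

# Tiny usage example with random numbers.
w = np.random.randn(4)
g = np.random.randn(4)
h = np.zeros_like(w)
w, h = sgd_update(w, g, h)
print(w)
```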
Protocol buffers:
As mentioned above, protocol buffers define the message types in the .proto file, and the message values are given in .prototxt (text) or .binaryproto (binary) files.
Caffe
All of Caffe's messages are defined in $CAFFE/src/caffe/proto/caffe.proto.
Experiment
In an experiment you mainly work with two protocol buffers: the solver's and the model's, which define the solver parameters (learning rate and so on) and the model structure (the network architecture), respectively. A parsing sketch follows below.
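Both prototxt files can be read back into their protobuf messages from Python. The sketch below assumes a solver.prototxt exists, that its net field points at the model prototxt, and that Caffe's generated caffe_pb2 module is importable.

```python
from caffe.proto import caffe_pb2
from google.protobuf import text_format

# Parse the solver definition (SolverParameter message) from its text form.
solver_param = caffe_pb2.SolverParameter()
with open('solver.prototxt') as f:
    text_format.Merge(f.read(), solver_param)

print('base_lr   =', solver_param.base_lr)
print('net proto =', solver_param.net)   # path to the model prototxt, if set

# The model definition is a NetParameter message and is parsed the same way.
net_param = caffe_pb2.NetParameter()
with open(solver_param.net) as f:
    text_format.Merge(f.read(), net_param)
print('number of layers:', len(net_param.layer) or len(net_param.layers))
```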
Tips:
- To freeze a layer so that it does not participate in training, set its blobs_lr to 0.
- For images, avoid reading data through an HDF5 layer if you can: it only stores float32 and float64, not uint8, so it wastes a lot of space.
Basic training workflow:
Option 1: convert your data into a format Caffe accepts: lmdb, leveldb, hdf5 / .mat, a list of images, etc. Option 2: write your own data-reading layer (see, for example, https://github.com/tnarihi/tnarihi-caffe-helper/blob/master/python/caffe_helper/layers/data_layers.py). A small HDF5 preparation sketch follows below.
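As one concrete instance of option 1, here is a hedged sketch of packing data into HDF5 for Caffe's HDF5Data layer. It assumes float data, the conventional dataset names 'data' and 'label', and a small text file listing the .h5 files that the layer's source parameter points at. (Keep the earlier caveat in mind: for raw images this format is space-hungry.)

```python
import h5py
import numpy as np

# Dummy data: 100 samples of shape 3 x 32 x 32 with integer class labels.
X = np.random.rand(100, 3, 32, 32).astype(np.float32)
y = np.random.randint(0, 10, size=100).astype(np.float32)

with h5py.File('train.h5', 'w') as f:
    f.create_dataset('data', data=X)    # dataset names match the layer's top blobs
    f.create_dataset('label', data=y)

# The HDF5Data layer's "source" parameter points at a text file listing .h5 files.
with open('train_h5_list.txt', 'w') as f:
    f.write('train.h5\n')
```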
Training in Python:
Documentation & examples: https://github.com/BVLC/caffe/pull/1733
Core code:
- $CAFFE/python/caffe/_caffe.cpp
  defines the Blob, Layer, Net, and Solver classes
- $CAFFE/python/caffe/pycaffe.py
  adds convenience functionality to the Net class
A minimal training-loop sketch follows below.
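A minimal sketch of driving training from Python, assuming a solver.prototxt exists and that the training net exposes a blob named 'loss' (both are assumptions):

```python
import caffe

caffe.set_mode_cpu()                         # or caffe.set_mode_gpu()

solver = caffe.SGDSolver('solver.prototxt')  # placeholder path

# Option A: let the solver run to completion according to solver.prototxt.
# solver.solve()

# Option B: step manually and watch the loss.
for it in range(100):
    solver.step(1)                           # one forward/backward/update iteration
    if it % 10 == 0:
        print('iter', it, 'loss', float(solver.net.blobs['loss'].data))

# To resume from an earlier snapshot:
# solver.restore('snapshot_iter_1000.solverstate')   # placeholder file name
```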
Debugging:
- Set DEBUG := 1 in Makefile.config.
- Set debug_info: true in solver.prototxt.
- In Python/MATLAB, check how the weights change after one round of forward & backward (a sketch follows below).
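The third point can look like the following in Python; it reuses the solver.prototxt placeholder from above and simply compares a copy of the weights before and after a single solver step.

```python
import caffe
import numpy as np

solver = caffe.SGDSolver('solver.prototxt')   # placeholder path

# Copy the current weights of every layer.
before = {name: [p.data.copy() for p in params]
          for name, params in solver.net.params.items()}

solver.step(1)   # one forward + backward + update

# Report how much each parameter blob moved; all-zero changes may indicate
# dead gradients, a zero learning rate, or a wiring mistake.
for name, params in solver.net.params.items():
    for i, p in enumerate(params):
        delta = np.abs(p.data - before[name][i]).mean()
        print(name, 'blob', i, 'mean |change| =', delta)
```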
Key references:
[ DeCAF ] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. ICML, 2014.
[ R-CNN ] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. CVPR, 2014.
[ Zeiler-Fergus Visualizing ] M. Zeiler and R. Fergus. Visualizing and understanding convolutional networks. ECCV, 2014.
[ LeNet ] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. IEEE, 1998.
[ AlexNet ] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. NIPS, 2012.
[ OverFeat ] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. ICLR, 2014.
[ Image-Style (Transfer learning) ] S. Karayev, M. Trentacoste, H. Han, A. Agarwala, T. Darrell, A. Hertzmann, and H. Winnemoeller. Recognizing image style. BMVC, 2014.
[ Karpathy14 ] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. CVPR, 2014.
[ Sutskever13 ] I. Sutskever. Training recurrent neural networks. PhD thesis, University of Toronto, 2013.
[ Chopra05 ] S. Chopra, R. Hadsell, and Y. LeCun. Learning a similarity metric discriminatively, with application to face verification. CVPR, 2005.
Source: http://blog.csdn.net/abcjennifer/article/details/46424949