Caffe Implementation Overview

Published: 2023/11/28 on the 生活随笔 blog.

Caffe實(shí)現(xiàn)概述
目錄
一、caffe配置文件介紹
二、標(biāo)準(zhǔn)層的定義
三、網(wǎng)絡(luò)微調(diào)技巧
四、Linux腳本使用及LMDB文件生成
五、帶你設(shè)計(jì)一個(gè)Caffe網(wǎng)絡(luò),用于分類任務(wù)
一、caffe配置文件介紹
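The figures that originally illustrated this section did not survive. In short, Caffe is driven by protobuf text files: a solver file holds the training hyper-parameters and points to a net prototxt that describes the layer graph. A minimal solver sketch (all values and paths here are illustrative assumptions, not from the original post):

```
net: "train_val.prototxt"        # network definition file
base_lr: 0.01                    # starting learning rate
lr_policy: "step"                # decay policy
gamma: 0.1                       # multiply lr by gamma at each step
stepsize: 10000                  # decay every 10000 iterations
max_iter: 45000
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000                   # save a model every 5000 iterations
snapshot_prefix: "snapshots/net"
solver_mode: GPU
```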





2. Standard layer definitions
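The figures for this section are also missing. Standard layers (Convolution, Pooling, ReLU, InnerProduct, Dropout, Softmax, ...) are all declared as `layer { }` blocks in the net prototxt. An illustrative convolution layer (the names and values are assumptions for the example):

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"                 # input blob
  top: "conv1"                   # output blob
  param { lr_mult: 1 }           # learning-rate multiplier for the weights
  param { lr_mult: 2 }           # and for the bias
  convolution_param {
    num_output: 64
    kernel_size: 3
    stride: 1
    pad: 1
    weight_filler { type: "xavier" }
  }
}
```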

3. Network fine-tuning tricks

Among the learning-rate decay policies, multistep is the most commonly used.
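Caffe's lr_policy options include fixed, step, multistep, exp, inv, poly, and sigmoid. With multistep, the learning rate is multiplied by gamma at each hand-picked stepvalue iteration, instead of at a fixed interval. An illustrative solver fragment (the values are assumptions, not from the original post):

```
base_lr: 0.01
lr_policy: "multistep"
gamma: 0.1             # lr is multiplied by gamma at each stepvalue
stepvalue: 32000       # first drop
stepvalue: 48000       # second drop
max_iter: 64000
```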


4. Linux scripts and LMDB file generation
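Caffe ships command-line tools that turn an image list into LMDB; a typical script wraps convert_imageset and compute_image_mean. A workflow sketch under assumed paths (CAFFE_ROOT, the image folder, and train.txt are placeholders):

```shell
#!/usr/bin/env bash
# Assumed layout: train.txt lists "relative/path.jpg label" pairs
# under /data/images/. Adjust paths for your setup.
CAFFE_ROOT=/home/user/caffe

$CAFFE_ROOT/build/tools/convert_imageset \
    --shuffle \
    --resize_height=32 --resize_width=32 \
    /data/images/ \
    train.txt \
    train_lmdb

# Optionally compute the mean image for the data layer's transform_param
$CAFFE_ROOT/build/tools/compute_image_mean train_lmdb mean.binaryproto
```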

5. Designing a Caffe network for a classification task

Below we will:
- use pycaffe to generate the solver configuration
- use pycaffe to generate the Caffe training and test networks

數(shù)據(jù)集下載:

demoCaffe

數(shù)據(jù)集下載,cifar mnist:
百度云盤:
鏈接: https://pan.baidu.com/s/1bHFQUz7Q6BMBZv25AhsXKQ 密碼: dva9
鏈接: https://pan.baidu.com/s/1rPRjf2hanlYYjBQQDmIjNQ 密碼: 5nhv

1. Creating LMDB data:
   Manual walkthroughs: https://blog.csdn.net/yx2017/article/details/72953537
   https://www.jianshu.com/p/9d7ed35960cb
   Code examples: https://www.cnblogs.com/leemo-o/p/4990021.html
   https://www.jianshu.com/p/ef84715e0fdc
   The following is provided for comparison.

demo_lmdb.py: generate data in LMDB format

```python
import lmdb
import numpy as np
import caffe
from caffe.proto import caffe_pb2

def write():
    # basic settings
    lmdb_file = 'lmdb_data'
    batch_size = 256
    lmdb_env = lmdb.open(lmdb_file, map_size=int(1e12))
    lmdb_txn = lmdb_env.begin(write=True)
    for x in range(batch_size):
        data = np.ones((3, 64, 64), np.uint8)  # dummy image, shape (C, H, W)
        label = x
        datum = caffe.io.array_to_datum(data, label)
        keystr = "{:0>8d}".format(x)  # zero-pad so keys sort numerically
        # LMDB keys must be bytes, so encode the string key
        lmdb_txn.put(keystr.encode('ascii'), datum.SerializeToString())
    lmdb_txn.commit()

def read():
    lmdb_env = lmdb.open('lmdb_data')
    lmdb_txn = lmdb_env.begin()
    datum = caffe_pb2.Datum()
    for key, value in lmdb_txn.cursor():
        datum.ParseFromString(value)
        label = datum.label
        data = caffe.io.datum_to_array(datum)
        print(label)
        print(data)

if __name__ == '__main__':
    write()
    read()
```
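LMDB iterates keys in lexicographic byte order, which is why the snippet above zero-pads the numeric key with `"{:0>8d}".format(x)`. A minimal sketch (plain Python, no LMDB needed) of why the padding matters:

```python
# Zero-padded keys sort in the same order as their numeric values;
# bare decimal strings do not ("10" < "2" lexicographically).
padded = ["{:0>8d}".format(x) for x in (2, 10, 100)]
bare = [str(x) for x in (2, 10, 100)]

print(sorted(padded) == padded)  # True: numeric order preserved
print(sorted(bare) == bare)      # False: "10" sorts before "2"
```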
    

demo_create_solver.py: generate the solver configuration file

```python
from caffe.proto import caffe_pb2

s = caffe_pb2.SolverParameter()
s.train_net = "train.prototxt"
s.test_net.append("test.prototxt")
s.test_interval = 100   # test every 100 training iterations
s.test_iter.append(10)  # forward batches per test phase
s.max_iter = 1000
s.base_lr = 0.1
s.weight_decay = 5e-4
s.lr_policy = "step"
s.display = 10
s.snapshot = 10
s.snapshot_prefix = "model"
s.type = "SGD"
s.solver_mode = caffe_pb2.SolverParameter.GPU

with open("net/s.prototxt", "w") as f:
    f.write(str(s))
```
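A quick way to sanity-check test_iter: each test phase runs test_iter forward batches, so test_iter times the test net's batch size should cover the whole test set. A small check (the sizes here are hypothetical, for illustration only):

```python
# Hypothetical sizes: 10,000 test images, test-net batch size 100.
test_set_size = 10000
test_batch_size = 100

# Number of forward batches needed per test phase to see every test image once.
test_iter = test_set_size // test_batch_size
print(test_iter)  # 100
```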
    

結(jié)果如下:

  1. train_net: “/home/kuan/PycharmProjects/demo_cnn_net/net/train.prototxt”
  2. test_net: “/home/kuan/PycharmProjects/demo_cnn_net/net/test.prototxt”
  3. test_iter: 1000
  4. test_interval: 100
  5. base_lr: 0.10000000149
  6. display: 100
  7. max_iter: 100000
  8. lr_policy: “step”
  9. weight_decay: 0.000500000023749
  10. snapshot: 100
  11. snapshot_prefix: “/home/kuan/PycharmProjects/demo_cnn_net/cnn_model/mnist/lenet/”
  12. solver_mode: GPU
  13. type: “SGD”
demo_creat_net.py: create the network

```python
import caffe

def create_net():
    net = caffe.NetSpec()
    # Data layer: ntop=2 because it produces two tops, data and label
    net.data, net.label = caffe.layers.Data(source="data.lmdb",
                                            backend=caffe.params.Data.LMDB,
                                            batch_size=32,
                                            ntop=2,
                                            transform_param=dict(crop_size=40, mirror=True))
    # Conv / ReLU / max-pool stacks
    net.conv1 = caffe.layers.Convolution(net.data, num_output=20, kernel_size=5,
                                         weight_filler={"type": "xavier"},
                                         bias_filler={"type": "xavier"})
    net.relu1 = caffe.layers.ReLU(net.conv1, in_place=True)
    net.pool1 = caffe.layers.Pooling(net.relu1, pool=caffe.params.Pooling.MAX,
                                     kernel_size=3, stride=2)
    net.conv2 = caffe.layers.Convolution(net.pool1, num_output=32, kernel_size=3,
                                         pad=1,
                                         weight_filler={"type": "xavier"},
                                         bias_filler={"type": "xavier"})
    net.relu2 = caffe.layers.ReLU(net.conv2, in_place=True)
    net.pool2 = caffe.layers.Pooling(net.relu2, pool=caffe.params.Pooling.MAX,
                                     kernel_size=3, stride=2)
    # Fully connected layers
    net.fc3 = caffe.layers.InnerProduct(net.pool2, num_output=1024,
                                        weight_filler=dict(type='xavier'))
    net.relu3 = caffe.layers.ReLU(net.fc3, in_place=True)
    # Dropout for regularization
    net.drop = caffe.layers.Dropout(net.relu3, dropout_param=dict(dropout_ratio=0.5))
    net.fc4 = caffe.layers.InnerProduct(net.drop, num_output=10,
                                        weight_filler=dict(type='xavier'))
    net.loss = caffe.layers.SoftmaxWithLoss(net.fc4, net.label)
    with open("net/tt.prototxt", 'w') as f:
        f.write(str(net.to_proto()))

if __name__ == '__main__':
    create_net()
```
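With crop_size=40, the spatial sizes flowing through this network can be checked by hand: Caffe convolutions round the output size down, while pooling rounds up. A small sketch of the arithmetic:

```python
import math

def conv_out(size, kernel, stride=1, pad=0):
    # Caffe convolution output size (rounds down)
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel, stride):
    # Caffe pooling output size (rounds up)
    return int(math.ceil((size - kernel) / float(stride))) + 1

s = 40                      # crop_size from the data layer
s = conv_out(s, 5)          # conv1: 5x5, no pad -> 36
s = pool_out(s, 3, 2)       # pool1: 3x3 stride 2 -> 18
s = conv_out(s, 3, pad=1)   # conv2: 3x3, pad 1 -> 18
s = pool_out(s, 3, 2)       # pool2: 3x3 stride 2 -> 9
print(s)                    # 9, so fc3 sees 32 * 9 * 9 = 2592 inputs
```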
    

生成結(jié)果如下:

  1. layer {
  2. name: “data”
  3. type: “Data”
  4. top: “data”
  5. top: “l(fā)abel”
  6. transform_param {
  7.  mirror: true
    
  8.  crop_size: 40
    
  9. }
  10. data_param {
  11. source: "/home/kuan/PycharmProjects/demo_cnn_net/lmdb_data"
    
  12. batch_size: 32
    
  13. backend: LMDB
    
  14. }
  15. }
  16. layer {
  17. name: “conv1”
  18. type: “Convolution”
  19. bottom: “data”
  20. top: “conv1”
  21. convolution_param {
  22. num_output: 20
    
  23. kernel_size: 5
    
  24. weight_filler {
    
  25.   type: "xavier"
    
  26. }
    
  27. bias_filler {
    
  28.   type: "xavier"
    
  29. }
    
  30. }
  31. }
  32. layer {
  33. name: “relu1”
  34. type: “ReLU”
  35. bottom: “conv1”
  36. top: “conv1”
  37. }
  38. layer {
  39. name: “pool1”
  40. type: “Pooling”
  41. bottom: “conv1”
  42. top: “pool1”
  43. pooling_param {
  44. pool: MAX
    
  45. kernel_size: 3
    
  46. stride: 2
    
  47. }
  48. }
  49. layer {
  50. name: “conv2”
  51. type: “Convolution”
  52. bottom: “pool1”
  53. top: “conv2”
  54. convolution_param {
  55. num_output: 32
    
  56. pad: 1
    
  57. kernel_size: 3
    
  58. weight_filler {
    
  59.   type: "xavier"
    
  60. }
    
  61. bias_filler {
    
  62.   type: "xavier"
    
  63. }
    
  64. }
  65. }
  66. layer {
  67. name: “relu2”
  68. type: “ReLU”
  69. bottom: “conv2”
  70. top: “conv2”
  71. }
  72. layer {
  73. name: “pool2”
  74. type: “Pooling”
  75. bottom: “conv2”
  76. top: “pool2”
  77. pooling_param {
  78. pool: MAX
    
  79. kernel_size: 3
    
  80. stride: 2
    
  81. }
  82. }
  83. layer {
  84. name: “fc3”
  85. type: “InnerProduct”
  86. bottom: “pool2”
  87. top: “fc3”
  88. inner_product_param {
  89. num_output: 1024
    
  90. weight_filler {
    
  91.   type: "xavier"
    
  92. }
    
  93. }
  94. }
  95. layer {
  96. name: “relu3”
  97. type: “ReLU”
  98. bottom: “fc3”
  99. top: “fc3”
  100. }
  101. layer {
  102.  name: "drop"
    
  103.  type: "Dropout"
    
  104.  bottom: "fc3"
    
  105.  top: "drop"
    
  106.  dropout_param {
    
  107.    dropout_ratio: 0.5
    
  108.  }
    
  109. }
  110. layer {
  111.  name: "fc4"
    
  112.  type: "InnerProduct"
    
  113.  bottom: "drop"
    
  114.  top: "fc4"
    
  115.  inner_product_param {
    
  116.    num_output: 10
    
  117.    weight_filler {
    
  118.      type: "xavier"
    
  119.    }
    
  120.  }
    
  121. }
  122. layer {
  123.  name: "loss"
    
  124.  type: "SoftmaxWithLoss"
    
  125.  bottom: "fc4"
    
  126.  bottom: "label"
    
  127.  top: "loss"
    
  128. }
demo_train.py: train the network

```python
import sys
sys.path.append('/home/kuan/AM-softmax_caffe/python')
import caffe

solver = caffe.SGDSolver("/home/kuan/PycharmProjects/demo_cnn_net/cnn_net/alexnet/solver.prototxt")
solver.solve()
```
demo_test.py: test the network

```python
import sys
sys.path.append('/home/kuan/AM-softmax_caffe/python')
import caffe
import numpy as np

# The caffemodel plus deploy.prototxt define the trained network for inference
deploy = "/home/kuan/PycharmProjects/demo_cnn_net/cnn_net/alexnet/deploy.prototxt"
model = "/home/kuan/PycharmProjects/demo_cnn_net/cnn_model/cifar/alexnet/alexnet_iter_110.caffemodel"
net = caffe.Net(deploy, model, caffe.TEST)
# Dummy input; broadcasts over the batch dimension of the data blob
net.blobs["data"].data[...] = np.ones((3, 32, 32), np.uint8)
net.forward()
prob = net.blobs["prob"].data[0]
print(prob)
```
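The prob blob holds the softmax scores over the 10 classes: they sum to 1, and the predicted label is the index of the largest score. A quick illustration with a made-up probability vector (the values are hypothetical):

```python
# Hypothetical softmax output for a 10-class model
prob = [0.01, 0.02, 0.60, 0.05, 0.04, 0.08, 0.05, 0.05, 0.05, 0.05]

# Predicted class = argmax of the scores
predicted_class = max(range(len(prob)), key=prob.__getitem__)
print(predicted_class)               # 2
print(abs(sum(prob) - 1.0) < 1e-9)   # True: softmax scores sum to 1
```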
