
An Object-Oriented CNN Implementation in Python (Part 4)


For the underlying theory, see: https://www.zybuluo.com/hanbingtao/note/485480

The code below implements what that article describes.
The algorithm breaks down into three steps, followed by the weight update:

  • Forward pass: compute each neuron's output $a_j$ (the subscript $j$ indexes the $j$-th neuron of the network; likewise below);
  • Backward pass: compute each neuron's error term $\delta_j$ ($\delta_j$ is also called the *sensitivity* in some of the literature). It is the partial derivative of the network's loss $E_d$ with respect to the neuron's weighted input $net_j$, i.e. $\delta_j = \frac{\partial E_d}{\partial net_j}$;
  • Compute the gradient of each connection weight $w_{ji}$ ($w_{ji}$ denotes the weight of the connection from neuron $i$ to neuron $j$) using $\frac{\partial E_d}{\partial w_{ji}} = a_i \delta_j$, where $a_i$ is the output of neuron $i$;
  • Finally, update each weight by gradient descent.
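As a tiny numeric illustration of these steps for a single sigmoid neuron with two inputs (a toy example of my own, not part of the CNN code below; the names `a_i`, `w_ji`, `t` just mirror the symbols above):

```python
import numpy as np

# hypothetical toy neuron: two inputs, sigmoid activation,
# squared-error loss E_d = (a_j - t)^2 / 2 against a target t
a_i = np.array([0.5, -0.2])            # outputs of the previous layer
w_ji = np.array([0.3, 0.8])            # weights into neuron j
net_j = w_ji @ a_i                     # weighted input net_j
a_j = 1.0 / (1.0 + np.exp(-net_j))     # step 1: forward output a_j

t = 1.0
delta_j = (a_j - t) * a_j * (1 - a_j)  # step 2: delta_j = dE_d/dnet_j
grad_w = a_i * delta_j                 # step 3: dE_d/dw_ji = a_i * delta_j
w_ji -= 0.1 * grad_w                   # step 4: gradient descent update
```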

For the detailed derivations, refer to the article linked above; here I only post the code.

• First, the file activators.py:
```python
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import numpy as np


class ReluActivator(object):
    def forward(self, weighted_input):
        return max(0, weighted_input)

    def backward(self, output):
        return 1 if output > 0 else 0


class IdentityActivator(object):
    def forward(self, weighted_input):
        return weighted_input

    def backward(self, output):
        return 1


class SigmoidActivator(object):
    def forward(self, weighted_input):
        return 1.0 / (1.0 + np.exp(-weighted_input))

    def backward(self, output):
        return output * (1 - output)


class TanhActivator(object):
    def forward(self, weighted_input):
        return 2.0 / (1.0 + np.exp(-2 * weighted_input)) - 1.0

    def backward(self, output):
        return 1 - output * output
```

These are implementations of a few basic activation functions. Note that each `backward` method computes the derivative expressed in terms of the forward *output* rather than the input (e.g. the sigmoid derivative is $y(1-y)$ where $y$ is the output).
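For instance (a minimal usage sketch):

```python
from activators import SigmoidActivator

act = SigmoidActivator()
y = act.forward(0.0)   # sigmoid(0) = 0.5
dy = act.backward(y)   # derivative via the output: y * (1 - y) = 0.25
```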

• Next, the file CNN.py implements the main components of the CNN (ported here to Python 3):
```python
#!/usr/bin/env python
# -*- coding: UTF-8 -*-
import numpy as np
from activators import ReluActivator, IdentityActivator


# Get the input patch for the current convolution position;
# handles both 2-D and 3-D inputs automatically.
def get_patch(input_array, i, j, filter_width, filter_height, stride):
    start_i = i * stride
    start_j = j * stride
    if input_array.ndim == 2:
        return input_array[start_i: start_i + filter_height,
                           start_j: start_j + filter_width]
    elif input_array.ndim == 3:
        return input_array[:,
                           start_i: start_i + filter_height,
                           start_j: start_j + filter_width]


# Get the index of the maximum value in a 2-D region.
def get_max_index(array):
    max_i = 0
    max_j = 0
    max_value = array[0, 0]
    for i in range(array.shape[0]):
        for j in range(array.shape[1]):
            if array[i, j] > max_value:
                max_value = array[i, j]
                max_i, max_j = i, j
    return max_i, max_j


# conv computes the convolution for 2-D and 3-D arrays
# (the dimensionality is dispatched inside get_patch).
def conv(input_array, kernel_array, output_array, stride, bias):
    output_width = output_array.shape[1]
    output_height = output_array.shape[0]
    kernel_width = kernel_array.shape[-1]
    kernel_height = kernel_array.shape[-2]
    for i in range(output_height):
        for j in range(output_width):
            # '*' here is element-wise multiplication of numpy arrays
            output_array[i][j] = (
                get_patch(input_array, i, j, kernel_width,
                          kernel_height, stride) * kernel_array
            ).sum() + bias


# padding implements zero padding; handles both 2-D and 3-D inputs.
def padding(input_array, zp):
    if zp == 0:
        return input_array
    else:
        if input_array.ndim == 3:
            input_width = input_array.shape[2]
            input_height = input_array.shape[1]
            input_depth = input_array.shape[0]
            padded_array = np.zeros((input_depth,
                                     input_height + 2 * zp,
                                     input_width + 2 * zp))
            padded_array[:,
                         zp: zp + input_height,
                         zp: zp + input_width] = input_array
            return padded_array
        elif input_array.ndim == 2:
            input_width = input_array.shape[1]
            input_height = input_array.shape[0]
            padded_array = np.zeros((input_height + 2 * zp,
                                     input_width + 2 * zp))
            padded_array[zp: zp + input_height,
                         zp: zp + input_width] = input_array
            return padded_array


# element_wise_op applies op to every element of a numpy array,
# writing the result back into the array in place.
def element_wise_op(array, op):
    for i in np.nditer(array, op_flags=['readwrite']):
        i[...] = op(i)


# Filter holds one convolution filter's parameters and gradients,
# and updates the parameters with gradient descent.
class Filter(object):
    def __init__(self, width, height, depth):
        self.weights = np.random.uniform(-1e-4, 1e-4,
                                         (depth, height, width))
        self.bias = 0
        self.weights_grad = np.zeros(self.weights.shape)
        self.bias_grad = 0

    def __repr__(self):
        return 'filter weights:\n%s\nbias:\n%s' % (
            repr(self.weights), repr(self.bias))

    def get_weights(self):
        return self.weights

    def get_bias(self):
        return self.bias

    def update(self, learning_rate):
        self.weights -= learning_rate * self.weights_grad
        self.bias -= learning_rate * self.bias_grad


# ConvLayer implements one convolutional layer.
class ConvLayer(object):
    def __init__(self, input_width, input_height, channel_number,
                 filter_width, filter_height, filter_number,
                 zero_padding, stride, activator, learning_rate):
        self.input_width = input_width
        self.input_height = input_height
        self.channel_number = channel_number
        self.filter_width = filter_width
        self.filter_height = filter_height
        self.filter_number = filter_number
        self.zero_padding = zero_padding
        self.stride = stride
        self.activator = activator
        self.learning_rate = learning_rate
        # height and width of the output feature map
        self.output_width = ConvLayer.calculate_output_size(
            self.input_width, filter_width, zero_padding, stride)
        self.output_height = ConvLayer.calculate_output_size(
            self.input_height, filter_height, zero_padding, stride)
        self.output_array = np.zeros((self.filter_number,
                                      self.output_height,
                                      self.output_width))
        # each element of self.filters is a Filter object
        self.filters = []
        for i in range(filter_number):
            self.filters.append(Filter(filter_width,
                                       filter_height,
                                       self.channel_number))

    # size of the convolutional layer's output
    @staticmethod
    def calculate_output_size(input_size, filter_size,
                              zero_padding, stride):
        return (input_size - filter_size + 2 * zero_padding) // stride + 1

    def forward(self, input_array):
        '''
        Compute the layer's output; the result is stored in
        self.output_array.
        '''
        self.input_array = input_array
        # add zero padding to the input
        self.padded_input_array = padding(input_array, self.zero_padding)
        for f in range(self.filter_number):
            filter = self.filters[f]
            conv(self.padded_input_array,
                 filter.get_weights(),
                 self.output_array[f],
                 self.stride, filter.get_bias())
        element_wise_op(self.output_array, self.activator.forward)

    def backward(self, input_array, sensitivity_array, activator):
        '''
        Compute the error term passed to the previous layer and the
        gradient of every weight. The previous layer's error term is
        stored in self.delta_array; the gradients are stored in each
        Filter object's weights_grad.
        '''
        self.forward(input_array)
        self.bp_sensitivity_map(sensitivity_array, activator)
        self.bp_gradient(sensitivity_array)

    def update(self):
        '''Update the weights by gradient descent.'''
        for filter in self.filters:
            filter.update(self.learning_rate)

    def bp_sensitivity_map(self, sensitivity_array, activator):
        '''
        Compute the sensitivity map passed to the previous layer.
        sensitivity_array: this layer's sensitivity map
        activator: the previous layer's activation function
        '''
        # handle the stride: expand the original sensitivity map
        expanded_array = self.expand_sensitivity_map(sensitivity_array)
        # full convolution: zero-pad the sensitivity map.
        # The input's zero-padding cells also receive a residual, but it
        # does not need to be propagated further, so it is not computed.
        expanded_width = expanded_array.shape[2]
        zp = (self.input_width + self.filter_width - 1 - expanded_width) // 2
        padded_array = padding(expanded_array, zp)
        print('padded_array:', np.shape(padded_array))
        # delta_array holds the sensitivity map passed to the previous layer
        self.delta_array = self.create_delta_array()
        # with several filters, the sensitivity map passed to the previous
        # layer is the sum of all the filters' sensitivity maps
        for f in range(self.filter_number):
            filter = self.filters[f]
            # rotate the filter weights by 180 degrees
            flipped_weights = np.array([np.rot90(w, 2)
                                        for w in filter.get_weights()])
            print('flipped_weights:', np.shape(flipped_weights))
            # compute the delta_array contributed by one filter
            delta_array = self.create_delta_array()
            for d in range(delta_array.shape[0]):
                # args: input_array, kernel_array, output_array, stride, bias
                conv(padded_array[f], flipped_weights[d],
                     delta_array[d], 1, 0)
            self.delta_array += delta_array
        # element-wise multiply by the activation function's derivative
        derivative_array = np.array(self.input_array)
        element_wise_op(derivative_array, activator.backward)
        self.delta_array *= derivative_array

    def bp_gradient(self, sensitivity_array):
        # handle the stride: expand the original sensitivity map
        expanded_array = self.expand_sensitivity_map(sensitivity_array)
        for f in range(self.filter_number):
            # gradient of every weight
            filter = self.filters[f]
            for d in range(filter.weights.shape[0]):
                conv(self.padded_input_array[d],
                     expanded_array[f],
                     filter.weights_grad[d], 1, 0)
            # gradient of the bias term
            filter.bias_grad = expanded_array[f].sum()

    def expand_sensitivity_map(self, sensitivity_array):
        print('sensitivity_array:\n', sensitivity_array)
        depth = sensitivity_array.shape[0]
        # size the expanded map as if the stride were 1
        expanded_width = (self.input_width - self.filter_width
                          + 2 * self.zero_padding + 1)
        expanded_height = (self.input_height - self.filter_height
                           + 2 * self.zero_padding + 1)
        # build the new sensitivity map
        expand_array = np.zeros((depth, expanded_height, expanded_width))
        # copy the error values over from the original sensitivity map
        for i in range(self.output_height):
            for j in range(self.output_width):
                i_pos = i * self.stride
                j_pos = j * self.stride
                expand_array[:, i_pos, j_pos] = sensitivity_array[:, i, j]
        print('expand_array:\n', expand_array)
        return expand_array

    def create_delta_array(self):
        return np.zeros((self.channel_number,
                         self.input_height, self.input_width))


# Max-pooling layer
class MaxPoolingLayer(object):
    def __init__(self, input_width, input_height, channel_number,
                 filter_width, filter_height, stride):
        self.input_width = input_width
        self.input_height = input_height
        self.channel_number = channel_number
        self.filter_width = filter_width
        self.filter_height = filter_height
        self.stride = stride
        self.output_width = (input_width - filter_width) // self.stride + 1
        self.output_height = (input_height - filter_height) // self.stride + 1
        self.output_array = np.zeros((self.channel_number,
                                      self.output_height,
                                      self.output_width))

    def forward(self, input_array):
        for d in range(self.channel_number):
            for i in range(self.output_height):
                for j in range(self.output_width):
                    self.output_array[d, i, j] = (
                        get_patch(input_array[d], i, j,
                                  self.filter_width,
                                  self.filter_height,
                                  self.stride).max())

    def backward(self, input_array, sensitivity_array):
        self.delta_array = np.zeros(input_array.shape)
        for d in range(self.channel_number):
            for i in range(self.output_height):
                for j in range(self.output_width):
                    patch_array = get_patch(input_array[d], i, j,
                                            self.filter_width,
                                            self.filter_height,
                                            self.stride)
                    k, l = get_max_index(patch_array)
                    # route the error only to the position of the max value
                    self.delta_array[d,
                                     i * self.stride + k,
                                     j * self.stride + l] = \
                        sensitivity_array[d, i, j]
```
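One thing worth noting before the tests: `conv` does not flip the kernel, so with stride 1 and zero bias it computes a valid cross-correlation. A quick way to sanity-check it against SciPy (a sketch of my own, assuming SciPy is installed and `conv` is imported from the file above):

```python
import numpy as np
from scipy.signal import correlate2d

x = np.random.rand(5, 5)
k = np.random.rand(3, 3)
out = np.zeros((3, 3))
conv(x, k, out, 1, 0)                  # stride 1, zero bias
ref = correlate2d(x, k, mode='valid')  # SciPy's 2-D cross-correlation
print(np.allclose(out, ref))           # expect: True
```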
The rest of CNN.py is a set of small tests for the convolution and pooling layers:

```python
# ---- tests for the convolution layer ----

# test data for the convolution layer
def init_test():
    a = np.array(
        [[[0, 1, 1, 0, 2], [2, 2, 2, 2, 1], [1, 0, 0, 2, 0],
          [0, 1, 1, 0, 0], [1, 2, 0, 0, 2]],
         [[1, 0, 2, 2, 0], [0, 0, 0, 2, 0], [1, 2, 1, 2, 1],
          [1, 0, 0, 0, 0], [1, 2, 1, 1, 1]],
         [[2, 1, 2, 0, 0], [1, 0, 0, 1, 0], [0, 2, 1, 0, 1],
          [0, 1, 2, 2, 2], [2, 1, 0, 0, 1]]])
    # assume the error terms (the sensitivity map) are already computed
    b = np.array(
        [[[0, 1, 1], [2, 2, 2], [1, 0, 0]],
         [[1, 0, 2], [0, 0, 0], [1, 2, 1]]])
    # args: input_width, input_height, channel_number, filter_width,
    # filter_height, filter_number, zero_padding, stride, activator,
    # learning_rate
    cl = ConvLayer(5, 5, 3, 3, 3, 2, 1, 2, IdentityActivator(), 0.001)
    cl.filters[0].weights = np.array(
        [[[-1, 1, 0], [0, 1, 0], [0, 1, 1]],
         [[-1, -1, 0], [0, 0, 0], [0, -1, 0]],
         [[0, 0, -1], [0, 1, 0], [1, -1, -1]]], dtype=np.float64)
    cl.filters[0].bias = 1
    cl.filters[1].weights = np.array(
        [[[1, 1, -1], [-1, -1, 1], [0, -1, 1]],
         [[0, 1, 0], [-1, 0, -1], [-1, 1, 0]],
         [[-1, 0, 0], [-1, 0, 1], [-1, 0, 0]]], dtype=np.float64)
    cl.filters[1].bias = 1
    return a, b, cl


# forward-pass test for the convolution layer
def test():
    a, b, cl = init_test()
    cl.forward(a)
    print('cl.output_array:\n', cl.output_array)


# backward-pass test for the convolution layer
def test_bp():
    a, b, cl = init_test()
    cl.backward(a, b, IdentityActivator())
    cl.update()
    print('cl.filters[0]:\n', cl.filters[0])
    print('cl.filters[1]:\n', cl.filters[1])


# ---- tests for the pooling layer ----

# test data for the pooling layer
def init_pool_test():
    a = np.array(
        [[[1, 1, 2, 4], [5, 6, 7, 8], [3, 2, 1, 0], [1, 2, 3, 4]],
         [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 0, 1], [3, 4, 5, 6]]],
        dtype=np.float64)
    b = np.array(
        [[[1, 2], [2, 4]],
         [[3, 5], [8, 2]]], dtype=np.float64)
    # args: input_width, input_height, channel_number, filter_width,
    # filter_height, stride
    mpl = MaxPoolingLayer(4, 4, 2, 2, 2, 2)
    return a, b, mpl


# forward-pass test for the pooling layer
def test_pool():
    a, b, mpl = init_pool_test()
    mpl.forward(a)
    print('input array:\n%s\noutput array:\n%s' % (a, mpl.output_array))


# backward-pass test for the pooling layer
def test_pool_bp():
    a, b, mpl = init_pool_test()
    mpl.backward(a, b)
    print('input array:\n%s\nsensitivity array:\n%s\ndelta array:\n%s' % (
        a, b, mpl.delta_array))


if __name__ == '__main__':
    test()
    test_pool()
    test_bp()
    print('................................................')
    test_pool_bp()

# quick demo of np.nditer (the mechanism behind element_wise_op)
'''
a = np.arange(6).reshape(2, 3)
print(a)
for x in np.nditer(a, op_flags=['readwrite']):
    x[...] = 2 * x
print(a)
'''
```
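A standard way to verify a backward pass like this one is a numerical gradient check: pick a scalar "loss" whose sensitivity map is easy to write down, nudge each weight by ±ε, and compare the finite-difference quotient against the analytic gradient. Below is a minimal sketch of that idea (a hypothetical helper of my own, not part of the file above). With an IdentityActivator and the loss E = output.sum(), the sensitivity map is simply all ones:

```python
def gradient_check():
    a, _, cl = init_test()
    # loss E = output.sum()  =>  dE/dnet is an all-ones sensitivity map
    sensitivity = np.ones(cl.output_array.shape, dtype=np.float64)
    cl.backward(a, sensitivity, IdentityActivator())  # fills weights_grad
    eps = 1e-4
    f = cl.filters[0]
    for d in range(f.weights.shape[0]):
        for i in range(f.weights.shape[1]):
            for j in range(f.weights.shape[2]):
                f.weights[d, i, j] += eps
                cl.forward(a)
                err1 = cl.output_array.sum()
                f.weights[d, i, j] -= 2 * eps
                cl.forward(a)
                err2 = cl.output_array.sum()
                expect = (err1 - err2) / (2 * eps)  # finite difference
                f.weights[d, i, j] += eps           # restore the weight
                print('weights(%d,%d,%d): expected %f, actual %f' % (
                    d, i, j, expect, f.weights_grad[d, i, j]))
```

If the implementation is correct, the expected and actual values agree to several decimal places.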

Some sample output from a run:

```
cl.output_array:
[[[ 6.  7.  5.]
  [ 3. -1. -1.]
  [ 2. -1.  4.]]

 [[ 3. -4. -7.]
  [ 2. -3. -3.]
  [ 1. -4. -4.]]]
input array:
[[[ 1.  1.  2.  4.]
  [ 5.  6.  7.  8.]
  [ 3.  2.  1.  0.]
  [ 1.  2.  3.  4.]]

 [[ 0.  1.  2.  3.]
  [ 4.  5.  6.  7.]
  [ 8.  9.  0.  1.]
  [ 3.  4.  5.  6.]]]
output array:
[[[ 6.  8.]
  [ 3.  4.]]

 [[ 5.  7.]
  [ 9.  6.]]]
sensitivity_array:
[[[0 1 1]
  [2 2 2]
  [1 0 0]]

 [[1 0 2]
  [0 0 0]
  [1 2 1]]]
expand_array:
[[[ 0.  0.  1.  0.  1.]
  [ 0.  0.  0.  0.  0.]
  [ 2.  0.  2.  0.  2.]
  [ 0.  0.  0.  0.  0.]
  [ 1.  0.  0.  0.  0.]]

 [[ 1.  0.  0.  0.  2.]
  [ 0.  0.  0.  0.  0.]
  [ 0.  0.  0.  0.  0.]
  [ 0.  0.  0.  0.  0.]
  [ 1.  0.  2.  0.  1.]]]
padded_array: (2, 7, 7)
flipped_weights: (3, 3, 3)
flipped_weights: (3, 3, 3)
```

bp_gradient then prints the same sensitivity_array and expand_array a second time, after which the updated filters are shown:

```
cl.filters[0]:
filter weights:
array([[[-1.008,  0.99 , -0.009],
        [-0.005,  0.994, -0.006],
        [-0.006,  0.995,  0.996]],

       [[-1.004, -1.001, -0.004],
        [-0.01 , -0.009, -0.012],
        [-0.002, -1.002, -0.002]],

       [[-0.002, -0.002, -1.003],
        [-0.005,  0.992, -0.005],
        [ 0.993, -1.008, -1.007]]])
bias:
0.991
cl.filters[1]:
filter weights:
array([[[ 9.98000000e-01,  9.98000000e-01, -1.00100000e+00],
        [-1.00400000e+00, -1.00700000e+00,  9.97000000e-01],
        [-4.00000000e-03, -1.00400000e+00,  9.98000000e-01]],

       [[ 0.00000000e+00,  9.99000000e-01,  0.00000000e+00],
        [-1.00900000e+00, -5.00000000e-03, -1.00400000e+00],
        [-1.00400000e+00,  1.00000000e+00,  0.00000000e+00]],

       [[-1.00400000e+00, -6.00000000e-03, -5.00000000e-03],
        [-1.00200000e+00, -5.00000000e-03,  9.98000000e-01],
        [-1.00200000e+00, -1.00000000e-03,  0.00000000e+00]]])
bias:
0.993
................................................
input array:
[[[ 1.  1.  2.  4.]
  [ 5.  6.  7.  8.]
  [ 3.  2.  1.  0.]
  [ 1.  2.  3.  4.]]

 [[ 0.  1.  2.  3.]
  [ 4.  5.  6.  7.]
  [ 8.  9.  0.  1.]
  [ 3.  4.  5.  6.]]]
sensitivity array:
[[[ 1.  2.]
  [ 2.  4.]]

 [[ 3.  5.]
  [ 8.  2.]]]
delta array:
[[[ 0.  0.  0.  0.]
  [ 0.  1.  0.  2.]
  [ 2.  0.  0.  0.]
  [ 0.  0.  0.  4.]]

 [[ 0.  0.  0.  0.]
  [ 0.  3.  0.  5.]
  [ 0.  8.  0.  0.]
  [ 0.  0.  0.  2.]]]
```
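One easy cross-check on these shapes: they follow the size formula implemented in `calculate_output_size`. For the `ConvLayer(5, 5, 3, 3, 3, 2, 1, 2, ...)` used in the test (5×5 input, 3×3 filter, zero padding 1, stride 2):

$$\text{output\_size} = \frac{\text{input\_size} - \text{filter\_size} + 2 \cdot zp}{\text{stride}} + 1 = \frac{5 - 3 + 2 \cdot 1}{2} + 1 = 3,$$

which is why cl.output_array is 2×3×3: one 3×3 feature map per filter.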

The fully connected layer is implemented much as in the previous article in this series, so I won't repeat it here. At this point you have all the basic building blocks needed to implement a simple convolutional neural network, though this is not yet a complete CNN.
Many excellent open-source CNN implementations already exist, so there is no real need to write your own; the code here is posted to deepen understanding of how convolutional networks work, and is meant for reference and study only.
