

Deep Learning (08) -- Residual Network (ResNet)

Published: 2023/12/13

Contents

  • 1. Residual Network Fundamentals
    • 1.1 Basic Concepts
    • 1.2 VGG19 and ResNet34 Architecture Diagrams
    • 1.3 Vanishing Gradients and Network Degradation
    • 1.4 Residual Block Variants
    • 1.5 ResNet Model Variants
    • 1.6 Residual Network Notes
    • 1.7 1×1 Convolutions (Supplement)
  • 2. Residual Networks (Kaiming He)
  • 3. ResNet-50 (Ng)
    • 3.1 The Problem with Very Deep Neural Networks
    • 3.2 Building a Residual Network
    • 3.3 First ResNet Model (50 Layers)
    • 3.4 Code Implementation
  • 4. ResNet with 100-1000 Layers
    • 4.1 152 Layers

1. Residual Network Fundamentals

1.1 Basic Concepts

Very, very deep neural networks are hard to train because of vanishing and exploding gradients. ResNets are built out of residual blocks, so we first explain what a residual block is.
Residual block structure:
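The structure can be sketched numerically. Below is a minimal fully connected residual block in NumPy (a conceptual sketch only, not the convolutional blocks used later): the main path computes F(x) with two weight layers, and the identity shortcut carries x around it unchanged.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """A toy residual block: y = relu(F(x) + x),
    where F(x) = W2 @ relu(W1 @ x) is the main path."""
    main_path = W2 @ relu(W1 @ x)   # F(x): two weight layers
    return relu(main_path + x)      # add the shortcut, then the final ReLU

# With zero weights the main path contributes nothing, so the block
# reduces to the identity (for non-negative inputs): this is why stacking
# residual blocks cannot hurt -- each block can always fall back to identity.
x = np.array([1.0, 2.0, 3.0])
W = np.zeros((3, 3))
y = residual_block(x, W, W)
```

This fallback-to-identity property is the key intuition developed in the rest of the article.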


1.2 VGG19 and ResNet34 Architecture Diagrams

1.3 Vanishing Gradients and Network Degradation




1.4 Residual Block Variants

Since AlexNet, state-of-the-art CNN architectures have grown steadily deeper: VGG and GoogLeNet have 19 and 22 layers respectively, compared with AlexNet's 5.

The basic idea of ResNet: introduce shortcut connections that skip one or more layers.

1.5 ResNet Model Variants

(1) ResNeXt


(2) DenseNet


1.6 Residual Network Notes


1.7 1×1 Convolutions (Supplement)

A 1×1 convolution is equivalent to applying a fully connected network to the n_c channel values at each spatial position of a slice.
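This channel-wise "shared dense layer" view can be checked directly with NumPy (a small sketch; `conv1x1` is an illustrative helper, not a library function):

```python
import numpy as np

def conv1x1(fmap, W):
    """Apply a 1x1 convolution with weights W (n_c_out x n_c_in) to a
    feature map of shape (H, W, n_c_in). Only the channel axis is
    contracted: every spatial position is treated independently."""
    return np.einsum('hwc,oc->hwo', fmap, W)

rng = np.random.default_rng(0)
fmap = rng.standard_normal((4, 4, 8))   # 4x4 feature map with 8 channels
W = rng.standard_normal((3, 8))         # project 8 channels down to 3

out = conv1x1(fmap, W)

# Same result as applying the dense layer to each pixel's channel vector:
dense_per_pixel = np.array([[W @ fmap[i, j] for j in range(4)]
                            for i in range(4)])
```

The two computations agree exactly, which is why 1×1 convolutions are often described as per-pixel fully connected layers and are used in ResNet's bottleneck blocks to shrink and restore the channel count cheaply.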

2. Residual Networks (Kaiming He)




3. ResNet-50 (Ng)

3.1 The Problem with Very Deep Neural Networks

3.2 Building a Residual Network

Residual blocks with shortcut connections can easily learn the identity function, so stacking additional ResNet blocks does not hurt training-set performance.
(1) identity block (standard block)


(2) convolutional block
Used when the input and output dimensions do not match.
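Why the mismatch arises follows from the convolution output-size formula, n_out = ⌊(n + 2p − f)/s⌋ + 1. A quick sketch (`conv_output_size` is an illustrative helper, not part of the Keras code below):

```python
def conv_output_size(n, f, p, s):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# In a convolutional block with stride s=2, the main path halves the
# spatial size (e.g. 56 -> 28), so the raw input can no longer be added
# elementwise to the output. The shortcut therefore needs its own
# 1x1, stride-2 convolution to match both spatial size and channel count.
main_hw = conv_output_size(56, f=1, p=0, s=2)      # first conv of the main path
shortcut_hw = conv_output_size(56, f=1, p=0, s=2)  # 1x1 conv on the shortcut
```

With matching strides on both paths, the Add() at the end of the block is always shape-compatible.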

3.3 First ResNet Model (50 Layers)



3.4 Code Implementation

from keras.models import Model
from keras.layers import (Input, Add, Dense, Activation, ZeroPadding2D,
                          BatchNormalization, Flatten, Conv2D,
                          AveragePooling2D, MaxPooling2D)
from keras.initializers import glorot_uniform


def identity_block(X, f, filters, stage, block):
    """
    X -- input tensor, shape=(m, n_H_prev, n_W_prev, n_C_prev)
    f -- kernel size of the middle conv layer on the main path
    filters -- [F1, F2, F3], filter counts of the three conv layers on the main path
    stage -- integer used to name the layers (stage 1/2/3/4/5...)
    block -- string used to name the residual blocks within a stage (a/b/c/d/e/f...)
    Returns:
    X -- output tensor of the identity block, shape=(m, n_H, n_W, n_C)
    """
    # define name basis
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'

    # get filter numbers
    F1, F2, F3 = filters

    # save the input value for the shortcut
    X_shortcut = X

    # first layer of the main path: conv-bn-relu
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # second layer of the main path: conv-bn-relu
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
               name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # third layer of the main path: conv-bn (no relu before the addition)
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # add shortcut and relu
    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X


def convolutional_block(X, f, filters, stage, block, s=2):
    """
    Same arguments as identity_block, plus:
    s -- stride of the first conv layer on the main path and of the shortcut
         conv layer; s=2 halves the spatial size
    """
    conv_name_base = 'res' + str(stage) + block + '_branch'
    bn_name_base = 'bn' + str(stage) + block + '_branch'
    F1, F2, F3 = filters
    X_shortcut = X

    # 1st layer
    X = Conv2D(filters=F1, kernel_size=(1, 1), strides=(s, s), padding='valid',
               name=conv_name_base + '2a', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2a')(X)
    X = Activation('relu')(X)

    # 2nd layer
    X = Conv2D(filters=F2, kernel_size=(f, f), strides=(1, 1), padding='same',
               name=conv_name_base + '2b', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2b')(X)
    X = Activation('relu')(X)

    # 3rd layer
    X = Conv2D(filters=F3, kernel_size=(1, 1), strides=(1, 1), padding='valid',
               name=conv_name_base + '2c', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name=bn_name_base + '2c')(X)

    # shortcut path: 1x1 conv with stride s so shapes match at the addition
    X_shortcut = Conv2D(filters=F3, kernel_size=(1, 1), strides=(s, s), padding='valid',
                        name=conv_name_base + '1',
                        kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3, name=bn_name_base + '1')(X_shortcut)

    X = Add()([X, X_shortcut])
    X = Activation('relu')(X)
    return X


def ResNet50(input_shape=(64, 64, 3), classes=6):
    """
    input - zero padding
    stage 1: conv - BatchNorm - relu - maxpool
    stage 2: conv block - identity block *2
    stage 3: conv block - identity block *3
    stage 4: conv block - identity block *5
    stage 5: conv block - identity block *2
    avgpool - flatten - fully connected

    input_shape -- shape of the input images
    classes -- number of classes
    Returns: a Model() instance in Keras
    """
    # define an input tensor with input_shape
    X_input = Input(input_shape)

    # zero padding
    X = ZeroPadding2D((3, 3))(X_input)

    # stage 1
    X = Conv2D(filters=64, kernel_size=(7, 7), strides=(2, 2), padding='valid',
               name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block='b')
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block='c')

    # stage 3
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    for blk in 'bcd':
        X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block=blk)

    # stage 4
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    for blk in 'bcdef':
        X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block=blk)

    # stage 5
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='b')
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='c')

    # fully connected head
    X = AveragePooling2D((2, 2), name='avg_pool')(X)
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)

    # a Model() instance in Keras
    model = Model(inputs=X_input, outputs=X, name='ResNet50')
    return model
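As a sanity check on the name, the "50" can be recovered by counting weighted layers (pure arithmetic, assuming the standard bottleneck design used above):

```python
# Each bottleneck block -- identity or convolutional -- contains 3 conv
# layers. Adding the initial 7x7 conv and the final fully connected layer:
blocks_per_stage = [3, 4, 6, 3]            # stages 2-5 of the ResNet50 above
n_layers = 2 + 3 * sum(blocks_per_stage)   # -> 50
```

Note that batch-norm and pooling layers are conventionally excluded from the count.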

4. ResNet with 100-1000 Layers

4.1 152 Layers


Code

def ResNet101(input_shape=(64, 64, 3), classes=6):
    """
    input - zero padding
    stage 1: conv - BatchNorm - relu - maxpool
    stage 2: conv block - identity block *2
    stage 3: conv block - identity block *3
    stage 4: conv block - identity block *22
    stage 5: conv block - identity block *2
    avgpool - flatten - fully connected

    input_shape -- shape of the input images
    classes -- number of classes
    Returns: a Model() instance in Keras
    """
    # define an input tensor with input_shape
    X_input = Input(input_shape)

    # zero padding
    X = ZeroPadding2D((3, 3))(X_input)

    # stage 1
    X = Conv2D(filters=64, kernel_size=(7, 7), strides=(2, 2), padding='valid',
               name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # stage 2 (filter counts follow the standard design, matching ResNet50 above)
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block='b')
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block='c')

    # stage 3
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    for blk in 'bcd':
        X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block=blk)

    # stage 4: identity blocks grow from 5 (50 layers) to 22 (101 layers)
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    for blk in 'bcdefghijklmnopqrstuvw':
        X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block=blk)

    # stage 5
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='b')
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='c')

    # fully connected head
    X = AveragePooling2D((2, 2), name='avg_pool')(X)
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)

    # a Model() instance in Keras
    model = Model(inputs=X_input, outputs=X, name='ResNet101')
    return model


def ResNet152(input_shape=(64, 64, 3), classes=6):
    """
    input - zero padding
    stage 1: conv - BatchNorm - relu - maxpool
    stage 2: conv block - identity block *2
    stage 3: conv block - identity block *7
    stage 4: conv block - identity block *35
    stage 5: conv block - identity block *2
    avgpool - flatten - fully connected

    input_shape -- shape of the input images
    classes -- number of classes
    Returns: a Model() instance in Keras
    """
    # define an input tensor with input_shape
    X_input = Input(input_shape)

    # zero padding
    X = ZeroPadding2D((3, 3))(X_input)

    # stage 1
    X = Conv2D(filters=64, kernel_size=(7, 7), strides=(2, 2), padding='valid',
               name='conv1', kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name='bn_conv1')(X)
    X = Activation('relu')(X)
    X = MaxPooling2D((3, 3), strides=(2, 2))(X)

    # stage 2
    X = convolutional_block(X, f=3, filters=[64, 64, 256], stage=2, block='a', s=1)
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block='b')
    X = identity_block(X, f=3, filters=[64, 64, 256], stage=2, block='c')

    # stage 3: identity blocks grow from 3 (101 layers) to 7 (152 layers)
    X = convolutional_block(X, f=3, filters=[128, 128, 512], stage=3, block='a', s=2)
    for blk in 'bcdefgh':
        X = identity_block(X, f=3, filters=[128, 128, 512], stage=3, block=blk)

    # stage 4: identity blocks grow from 22 (101 layers) to 35 (152 layers)
    X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
    for blk in list('bcdefghijklmnopqrstuvwxyz') + ['z%d' % i for i in range(1, 11)]:
        X = identity_block(X, f=3, filters=[256, 256, 1024], stage=4, block=blk)

    # stage 5
    X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='b')
    X = identity_block(X, f=3, filters=[512, 512, 2048], stage=5, block='c')

    # fully connected head
    X = AveragePooling2D((2, 2), name='avg_pool')(X)
    X = Flatten()(X)
    X = Dense(classes, activation='softmax', name='fc' + str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)

    # a Model() instance in Keras
    model = Model(inputs=X_input, outputs=X, name='ResNet152')
    return model
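The same weighted-layer arithmetic confirms the deeper variants (a quick check, assuming 3 conv layers per bottleneck block plus the initial conv and the final fully connected layer):

```python
def resnet_depth(blocks_per_stage):
    """Weighted-layer count of a bottleneck ResNet: 3 convs per block,
    plus the initial 7x7 conv and the final fully connected layer."""
    return 2 + 3 * sum(blocks_per_stage)

depth_101 = resnet_depth([3, 4, 23, 3])   # stage 4: 1 conv block + 22 identity blocks
depth_152 = resnet_depth([3, 8, 36, 3])   # stages 3 and 4 are the ones deepened
```

Going from 50 to 101 to 152 layers only changes how many identity blocks are stacked in stages 3 and 4; the block definitions themselves are reused unchanged.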
