[Neural Networks] (14) MnasNet: code reproduction, network analysis, with complete TensorFlow code


Hello everyone. Today I will share how to reproduce Google's lightweight neural network MnasNet with TensorFlow.

Mobile devices (phones) and edge devices (security cameras, autonomous driving) usually have limited compute and cannot run large neural network models. We therefore need models with fewer parameters, less computation, fewer memory accesses, and lower energy consumption. Lightweight networks such as MobileNet and ShuffleNet are well suited to such compute-constrained devices: they trade a small amount of accuracy for much faster inference.

In earlier posts I introduced several classic lightweight networks. This article builds on MobileNetV1 and V2, so I recommend studying those two networks first:

MobileNetV1:https://blog.csdn.net/dgvv4/article/details/123415708

MobileNetV2:https://blog.csdn.net/dgvv4/article/details/123417739


1. Review of MobileNet core concepts

MnasNet sits between MobileNetV2 and MobileNetV3, so it is important to be familiar with the MobileNet concepts before starting. To help you understand the MnasNet model, let's briefly review the core building blocks of the MobileNet family.

1.1 Depthwise separable convolution

A depthwise separable convolution consists of a depthwise convolution followed by a pointwise convolution.

The depthwise convolution processes spatial (height and width) information. There is one filter per input channel, and each channel is convolved with its own filter to produce one feature map, so the number of output channels equals the number of input channels.

The pointwise convolution processes cross-channel information. It is an ordinary convolution with a 1*1 kernel: each filter spans all input channels, multiplying and summing the corresponding values as it slides, and the number of output feature maps is determined by the number of filters.
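To see why this factorization saves parameters, here is a small back-of-the-envelope sketch (the kernel size and channel counts are arbitrary example values, not taken from the paper):

# Weight counts only, ignoring BN and bias terms.
k, c_in, c_out = 3, 144, 24              # example: 3x3 kernel, 144 input channels, 24 output channels

standard  = k * k * c_in * c_out         # standard conv: one k*k*c_in filter per output channel
depthwise = k * k * c_in                 # depthwise conv: one k*k filter per input channel
pointwise = 1 * 1 * c_in * c_out         # pointwise conv: 1x1 filters that mix channels
separable = depthwise + pointwise

print(standard, separable, round(standard / separable, 1))   # 31104 4752 6.5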


1.2 Inverted residual structure

MobileNetV2 uses the inverted residual block. The input is first expanded with a 1x1 convolution; a depthwise convolution is then applied in the high-dimensional space; finally a 1x1 convolution projects back down, with a linear activation used for the projection. When the stride is 2 (downsampling stage) there is no residual connection; when the stride is 1 (basic block) the input and output feature maps are joined by a residual connection.


2. The MnasNet network model

2.1 Key innovations

(1) MnasNet uses a multi-objective optimization function that balances speed and accuracy.

Measured inference time on a real phone is used as the speed signal and accuracy as the quality signal, combined into a multi-objective reward: maximize ACC(m) * [LAT(m) / T]^w. T is a hard latency target, a specified time budget (e.g. complete inference within 80 ms); LAT(m) is the measured inference time of model m; the exponent w equals α when LAT(m) ≤ T and β otherwise, and both are chosen to be non-positive, so a longer inference time never increases the objective.

With α = 0 and β = −1 (the hard-constraint setting), plot latency on the horizontal axis and the objective on the vertical axis: below the hard target the objective is independent of latency and depends only on accuracy; once the target is exceeded the objective drops sharply and becomes latency-dependent. Models that exceed the hard target are penalized, so the searched models cluster tightly around the target latency.

With α = β = −0.07 (the soft-constraint setting), the objective varies smoothly with latency on both sides of the target and there is no harsh penalty. The search then explores a larger portion of the space and finds a more diverse set of speed/accuracy trade-offs.
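To make the two settings concrete, below is a minimal sketch of the reward computation (the accuracy and latency numbers are made up; the α/β pairs are the two settings discussed above):

def mnasnet_reward(acc, lat_ms, target_ms, alpha, beta):
    # The exponent w depends on whether the measured latency meets the target
    w = alpha if lat_ms <= target_ms else beta
    return acc * (lat_ms / target_ms) ** w

# Hard constraint (alpha=0, beta=-1): exceeding the 80 ms target is punished sharply.
print(mnasnet_reward(0.75, 70, 80, 0, -1))    # 0.75  (latency ignored below the target)
print(mnasnet_reward(0.75, 100, 80, 0, -1))   # 0.60  (objective scaled by 80/100)

# Soft constraint (alpha=beta=-0.07): the trade-off is smooth on both sides of the target.
print(mnasnet_reward(0.75, 70, 80, -0.07, -0.07))    # ~0.757
print(mnasnet_reward(0.75, 100, 80, -0.07, -0.07))   # ~0.738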

(2) A hierarchical neural architecture search space

The convolutional network is divided into 7 blocks. Layers within a block share the same structure, but the structure differs between blocks, which lets the search design different structures for the shallow and deep parts of the network.

Reinforcement learning is used to search, for each block, the best convolution type, kernel size, whether to use SE attention, the skip-connection type, and the number of layers in the block.

This makes the blocks structurally different from one another, giving the network more diversity.


2.2 Model evaluation

The paper compares MnasNet with MobileNet: MnasNet is about 1.8x faster than MobileNet, and at each latency budget MnasNet's accuracy is consistently higher than MobileNet's.

Under different width multipliers and input-resolution multipliers, MnasNet also outperforms MobileNet.


3. Code reproduction

3.1 Network architecture

The network architecture is shown in the figure below. The purple blocks are MobileNetV1-style depthwise separable convolution blocks, the green blocks are MobileNetV2-style inverted residual blocks, and the red blocks are inverted residual blocks with SE attention added.

Here MBConv6 means the 1*1 expansion convolution inside the inverted residual block raises the channel count to 6 times the input channels. The "x2" to the right of an MBConv6 block means the block is repeated twice: first one downsampling block (stride=2), then one basic block (stride=1). Some blocks downsample and some do not, so be careful when assembling the network.


3.2 Building the convolution modules

(1) Standard convolution block

A standard convolution block consists of a convolution layer (Conv), batch normalization (BN), and a ReLU activation.

#(1) Standard convolution block
def conv_block(input_tensor, filters, kernel_size, stride):
    # Ordinary convolution + BN + activation
    x = layers.Conv2D(filters = filters,          # number of filters
                      kernel_size = kernel_size,  # kernel size
                      strides = stride,           # stride
                      use_bias = False,           # no bias needed before a BN layer
                      padding = 'same')(input_tensor)  # stride=1 keeps the feature map size, stride=2 halves height and width
    x = layers.BatchNormalization()(x)  # batch normalization
    x = layers.ReLU()(x)                # ReLU activation
    return x
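As a quick sanity check of the stride behaviour described in the comments, a dummy tensor can be pushed through conv_block (a minimal sketch; it assumes conv_block and the imports from section 3.3 are already defined):

import tensorflow as tf

dummy = tf.random.normal([1, 224, 224, 3])   # a batch with one 224x224 RGB image
out = conv_block(dummy, filters=32, kernel_size=(3,3), stride=2)
print(out.shape)   # (1, 112, 112, 32): stride 2 halves the height and width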

(2) Depthwise separable convolution block

A depthwise separable convolution block is made of one depthwise convolution followed by one 1*1 pointwise convolution.

#(2) Depthwise convolution
def depthwise_conv_block(input_tensor, kernel_size, stride):
    # Only processes spatial (H, W) information; input and output channel counts are the same
    x = layers.DepthwiseConv2D(kernel_size = kernel_size,  # kernel size
                               strides = stride,           # stride
                               use_bias = False,           # no bias needed before a BN layer
                               padding = 'same')(input_tensor)  # stride=1 keeps the feature map size
    x = layers.BatchNormalization()(x)  # batch normalization
    x = layers.ReLU()(x)                # ReLU activation
    return x  # same number of feature maps as the input

#(3) Pointwise convolution
def pointwise_conv_block(input_tensor, filters):
    # A 1x1 convolution only fuses information across channels; each filter produces one output feature map
    x = layers.Conv2D(filters = filters,     # number of filters, i.e. number of output feature maps
                      kernel_size = (1,1),   # 1x1 kernel, no spatial processing
                      strides = 1,           # stride 1 keeps the feature map size
                      padding = 'same',
                      use_bias = False)(input_tensor)  # no bias needed before a BN layer
    x = layers.BatchNormalization()(x)  # batch normalization
    # No ReLU: linear activation
    return x

#(4) Depthwise separable convolution == depthwise convolution + pointwise convolution
def sep_conv_block(input_tensor, kernel_size, stride, filters):
    # Depthwise convolution: spatial information only, no cross-channel mixing
    x = depthwise_conv_block(input_tensor, kernel_size, stride)
    # Pointwise convolution: cross-channel information fusion
    x = pointwise_conv_block(x, filters)
    return x  # output of the depthwise separable convolution

(3) Inverted residual block

First a 1*1 convolution expands the channels; then a depthwise convolution is applied in the high-dimensional space; finally a 1*1 convolution projects back down (with a linear activation). Only when the stride is 1 and the input and output feature maps have the same shape can the input and output be joined by a residual connection.

#(5) Inverted residual block built from depthwise separable convolution
# 1x1 standard convolution expands channels N times, then depthwise convolution, then 1x1 pointwise convolution projects back down
def inverted_res_block(input_tensor, expansion, kernel_size, stride, out_channel):
    # keras.backend.int_shape returns the tensor shape; we only need the last (channel) dimension
    in_channel = keras.backend.int_shape(input_tensor)[-1]

    # Expand the channel count with the custom standard-convolution block
    x = conv_block(input_tensor,                    # input feature map
                   kernel_size = (1,1),             # 1x1 kernel
                   filters = in_channel*expansion,  # expand channels by a factor of `expansion`
                   stride = 1)

    # Depthwise convolution
    x = depthwise_conv_block(x, kernel_size=kernel_size, stride=stride)

    # Pointwise convolution projects down to out_channel feature maps
    x = pointwise_conv_block(x, filters = out_channel)

    # If stride == 1 and the input and output shapes match, add a residual connection
    if stride == 1 and input_tensor.shape == x.shape:
        output = layers.Add()([input_tensor, x])
        return output
    # Otherwise (stride == 2 or a channel change) return the pointwise-convolution output directly
    else:
        return x
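A similar check (again assuming the imports from section 3.3) makes the residual rule visible: with stride 1 and matching channel counts the input is added back, while a stride-2 call just returns the projected output:

import tensorflow as tf

x = tf.random.normal([1, 56, 56, 24])
same = inverted_res_block(x, expansion=6, kernel_size=(3,3), stride=1, out_channel=24)
down = inverted_res_block(x, expansion=6, kernel_size=(3,3), stride=2, out_channel=40)
print(same.shape)   # (1, 56, 56, 24): shapes match, so the residual Add() is applied
print(down.shape)   # (1, 28, 28, 40): downsampled, no residual connection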

(4) Inverted residual block with SE attention

First a 1*1 convolution expands the channels; then a depthwise convolution in the high-dimensional space; then an SE attention module; finally a 1*1 convolution projects back down (linear activation). When the input and output have the same shape, they are joined by a residual connection.

#(7) Squeeze-and-excitation (SE) module
def squeeze_excitation(input_tensor):
    inputs = input_tensor                  # keep a copy of the feature map
    squeeze = inputs.shape[-1] // 2        # bottleneck width: half of the input channels (integer division)
    excitation = inputs.shape[-1]          # restore to the original channel count

    # e.g. [416,416,24] ==> [None,24]
    x = layers.GlobalAveragePooling2D()(input_tensor)  # global average pooling
    # e.g. [None,24] ==> [None,12]
    x = layers.Dense(squeeze)(x)     # fully connected layer, channels halved
    x = layers.ReLU()(x)             # activation, shape unchanged
    # e.g. [None,12] ==> [None,24]
    x = layers.Dense(excitation)(x)  # fully connected layer, back to the original channel count
    x = tf.nn.sigmoid(x)             # sigmoid gate, shape unchanged
    # e.g. [None,24] ==> [1,1,24]
    x = layers.Reshape(target_shape = (1,1,excitation))(x)

    # [416,416,24] * [1,1,24] ==> [416,416,24]
    output = inputs * x  # element-wise (broadcast) multiplication, shape unchanged
    return output

#(8) Inverted residual block with the squeeze-and-excitation module applied
def inverted_se_res_block(input_tensor, expansion, kernel_size, stride, out_channel):
    # Identical to inverted_res_block except for the extra SE layer
    # Number of input channels
    in_channel = keras.backend.int_shape(input_tensor)[-1]

    # 1x1 standard convolution block, channels expanded by a factor of `expansion`
    x = conv_block(input_tensor, filters=in_channel*expansion, kernel_size=(1,1), stride=1)
    # Depthwise convolution block, channel count unchanged
    x = depthwise_conv_block(x, kernel_size, stride)
    # SE module
    x = squeeze_excitation(x)
    # Pointwise 1x1 convolution projects the channels back down
    x = pointwise_conv_block(x, filters=out_channel)

    # If stride == 1 and the input and output shapes match, add a residual connection
    if stride == 1 and input_tensor.shape == x.shape:
        output = layers.Add()([input_tensor, x])
        return output
    # Otherwise return the pointwise-convolution output directly
    else:
        return x

3.3 Complete code

The layers are stacked according to the architecture diagram in section 3.1. Combined with the explanations above, every line of the code is commented; if anything is unclear, feel free to ask in the comments section.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, Model


#(1) Standard convolution block
def conv_block(input_tensor, filters, kernel_size, stride):
    # Ordinary convolution + BN + activation
    x = layers.Conv2D(filters = filters,          # number of filters
                      kernel_size = kernel_size,  # kernel size
                      strides = stride,           # stride
                      use_bias = False,           # no bias needed before a BN layer
                      padding = 'same')(input_tensor)  # stride=1 keeps the feature map size, stride=2 halves height and width
    x = layers.BatchNormalization()(x)  # batch normalization
    x = layers.ReLU()(x)                # ReLU activation
    return x


#(2) Depthwise convolution
def depthwise_conv_block(input_tensor, kernel_size, stride):
    # Only processes spatial (H, W) information; input and output channel counts are the same
    x = layers.DepthwiseConv2D(kernel_size = kernel_size,  # kernel size
                               strides = stride,           # stride
                               use_bias = False,           # no bias needed before a BN layer
                               padding = 'same')(input_tensor)  # stride=1 keeps the feature map size
    x = layers.BatchNormalization()(x)  # batch normalization
    x = layers.ReLU()(x)                # ReLU activation
    return x  # same number of feature maps as the input


#(3) Pointwise convolution
def pointwise_conv_block(input_tensor, filters):
    # A 1x1 convolution only fuses information across channels; each filter produces one output feature map
    x = layers.Conv2D(filters = filters,     # number of filters, i.e. number of output feature maps
                      kernel_size = (1,1),   # 1x1 kernel, no spatial processing
                      strides = 1,           # stride 1 keeps the feature map size
                      padding = 'same',
                      use_bias = False)(input_tensor)  # no bias needed before a BN layer
    x = layers.BatchNormalization()(x)  # batch normalization
    # No ReLU: linear activation
    return x


#(4) Depthwise separable convolution == depthwise convolution + pointwise convolution
def sep_conv_block(input_tensor, kernel_size, stride, filters):
    # Depthwise convolution: spatial information only, no cross-channel mixing
    x = depthwise_conv_block(input_tensor, kernel_size, stride)
    # Pointwise convolution: cross-channel information fusion
    x = pointwise_conv_block(x, filters)
    return x  # output of the depthwise separable convolution


#(5) Inverted residual block built from depthwise separable convolution
# 1x1 standard convolution expands channels N times, then depthwise convolution, then 1x1 pointwise convolution projects back down
def inverted_res_block(input_tensor, expansion, kernel_size, stride, out_channel):
    # keras.backend.int_shape returns the tensor shape; we only need the last (channel) dimension
    in_channel = keras.backend.int_shape(input_tensor)[-1]

    # Expand the channel count with the custom standard-convolution block
    x = conv_block(input_tensor,                    # input feature map
                   kernel_size = (1,1),             # 1x1 kernel
                   filters = in_channel*expansion,  # expand channels by a factor of `expansion`
                   stride = 1)

    # Depthwise convolution
    x = depthwise_conv_block(x, kernel_size=kernel_size, stride=stride)

    # Pointwise convolution projects down to out_channel feature maps
    x = pointwise_conv_block(x, filters = out_channel)

    # If stride == 1 and the input and output shapes match, add a residual connection
    if stride == 1 and input_tensor.shape == x.shape:
        output = layers.Add()([input_tensor, x])
        return output
    # Otherwise return the pointwise-convolution output directly
    else:
        return x


#(6) An MBConv stage consists of one downsampling block (stride=2) and several basic blocks (stride=1)
def MBConv(input_tensor, expansion, kernel_size, filters, stride, num):
    # First block; depending on `stride` it may or may not downsample
    x = inverted_res_block(input_tensor, expansion, kernel_size, stride, out_channel=filters)
    # num-1 basic blocks; num is the total number of inverted_res_block modules in this stage
    for _ in range(1, num):
        x = inverted_res_block(x, expansion, kernel_size, stride=1, out_channel=filters)
    return x  # feature map produced by the MBConv stage


#(7) Squeeze-and-excitation (SE) module
def squeeze_excitation(input_tensor):
    inputs = input_tensor                  # keep a copy of the feature map
    squeeze = inputs.shape[-1] // 2        # bottleneck width: half of the input channels (integer division)
    excitation = inputs.shape[-1]          # restore to the original channel count

    # e.g. [416,416,24] ==> [None,24]
    x = layers.GlobalAveragePooling2D()(input_tensor)  # global average pooling
    # e.g. [None,24] ==> [None,12]
    x = layers.Dense(squeeze)(x)     # fully connected layer, channels halved
    x = layers.ReLU()(x)             # activation, shape unchanged
    # e.g. [None,12] ==> [None,24]
    x = layers.Dense(excitation)(x)  # fully connected layer, back to the original channel count
    x = tf.nn.sigmoid(x)             # sigmoid gate, shape unchanged
    # e.g. [None,24] ==> [1,1,24]
    x = layers.Reshape(target_shape = (1,1,excitation))(x)

    # [416,416,24] * [1,1,24] ==> [416,416,24]
    output = inputs * x  # element-wise (broadcast) multiplication, shape unchanged
    return output


#(8) Inverted residual block with the squeeze-and-excitation module applied
def inverted_se_res_block(input_tensor, expansion, kernel_size, stride, out_channel):
    # Identical to inverted_res_block except for the extra SE layer
    # Number of input channels
    in_channel = keras.backend.int_shape(input_tensor)[-1]

    # 1x1 standard convolution block, channels expanded by a factor of `expansion`
    x = conv_block(input_tensor, filters=in_channel*expansion, kernel_size=(1,1), stride=1)
    # Depthwise convolution block, channel count unchanged
    x = depthwise_conv_block(x, kernel_size, stride)
    # SE module
    x = squeeze_excitation(x)
    # Pointwise 1x1 convolution projects the channels back down
    x = pointwise_conv_block(x, filters=out_channel)

    # If stride == 1 and the input and output shapes match, add a residual connection
    if stride == 1 and input_tensor.shape == x.shape:
        output = layers.Add()([input_tensor, x])
        return output
    # Otherwise return the pointwise-convolution output directly
    else:
        return x


#(9) An MBConv_SE stage consists of one downsampling block (stride=2) and several basic blocks (stride=1)
def MBConv_SE(input_tensor, expansion, kernel_size, filters, stride, num):
    # First block; depending on `stride` it may or may not downsample
    x = inverted_se_res_block(input_tensor, expansion, kernel_size, stride, out_channel=filters)
    # num-1 basic blocks; num is the total number of inverted_se_res_block modules in this stage
    for _ in range(1, num):
        x = inverted_se_res_block(x, expansion, kernel_size, stride=1, out_channel=filters)
    return x  # feature map produced by the MBConv_SE stage


#(10) Build the backbone network
def MnasNet(input_shape, classes):
    # Network input tensor
    inputs = keras.Input(shape=input_shape)

    # [224,224,3] ==> [112,112,32]
    x = conv_block(inputs, 32, kernel_size=(3,3), stride=2)
    # [112,112,32] ==> [112,112,16]
    x = sep_conv_block(x, kernel_size=(3,3), stride=1, filters=16)
    # [112,112,16] ==> [56,56,24]
    x = MBConv(x, expansion=6, kernel_size=(3,3), filters=24, stride=2, num=2)
    # [56,56,24] ==> [28,28,40]
    x = MBConv_SE(x, expansion=3, kernel_size=(5,5), filters=40, stride=2, num=3)
    # [28,28,40] ==> [14,14,80]
    x = MBConv(x, expansion=6, kernel_size=(3,3), filters=80, stride=2, num=4)
    # [14,14,80] ==> [14,14,112]
    x = MBConv_SE(x, expansion=6, kernel_size=(3,3), filters=112, stride=1, num=2)
    # [14,14,112] ==> [7,7,160]
    x = MBConv_SE(x, expansion=6, kernel_size=(5,5), filters=160, stride=2, num=3)
    # [7,7,160] ==> [7,7,320]
    x = MBConv(x, expansion=6, kernel_size=(3,3), filters=320, stride=1, num=1)

    # One more standard convolution: [7,7,320] ==> [7,7,1280]
    x = conv_block(x, filters=1280, kernel_size=(1,1), stride=1)
    # [7,7,1280] ==> [None,1280]
    x = layers.GlobalAveragePooling2D()(x)
    # [None,1280] ==> [None,1000]
    logits = layers.Dense(classes)(x)  # raw logits; apply the Softmax during training for numerical stability

    # Assemble the model
    model = Model(inputs, logits)
    return model


#(11) Instantiate the network
if __name__ == '__main__':

    model = MnasNet(input_shape=[224,224,3], classes=1000)  # input image shape and number of classes

    # Print the network architecture
    model.summary()
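Since the model outputs raw logits rather than probabilities, one possible training setup (a sketch only; x_train and y_train stand in for your own dataset) is to let the loss apply the softmax via from_logits=True:

model = MnasNet(input_shape=[224, 224, 3], classes=1000)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),  # softmax applied inside the loss
              metrics=['accuracy'])

# model.fit(x_train, y_train, batch_size=32, epochs=10)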

3.4 Model architecture printout

Viewing the architecture with model.summary(), the model has roughly 6.7 million parameters:

Model: "model"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 224, 224, 3) 0                                            
__________________________________________________________________________________________________
conv2d (Conv2D)                 (None, 112, 112, 32) 864         input_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 112, 112, 32) 128         conv2d[0][0]                     
__________________________________________________________________________________________________
re_lu (ReLU)                    (None, 112, 112, 32) 0           batch_normalization[0][0]        
__________________________________________________________________________________________________
depthwise_conv2d (DepthwiseConv (None, 112, 112, 32) 288         re_lu[0][0]                      
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 112, 112, 32) 128         depthwise_conv2d[0][0]           
__________________________________________________________________________________________________
re_lu_1 (ReLU)                  (None, 112, 112, 32) 0           batch_normalization_1[0][0]      
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 112, 112, 16) 512         re_lu_1[0][0]                    
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 112, 112, 16) 64          conv2d_1[0][0]                   
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 112, 112, 96) 1536        batch_normalization_2[0][0]      
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 112, 112, 96) 384         conv2d_2[0][0]                   
__________________________________________________________________________________________________
re_lu_2 (ReLU)                  (None, 112, 112, 96) 0           batch_normalization_3[0][0]      
__________________________________________________________________________________________________
depthwise_conv2d_1 (DepthwiseCo (None, 56, 56, 96)   864         re_lu_2[0][0]                    
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 56, 56, 96)   384         depthwise_conv2d_1[0][0]         
__________________________________________________________________________________________________
re_lu_3 (ReLU)                  (None, 56, 56, 96)   0           batch_normalization_4[0][0]      
__________________________________________________________________________________________________
conv2d_3 (Conv2D)               (None, 56, 56, 24)   2304        re_lu_3[0][0]                    
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 56, 56, 24)   96          conv2d_3[0][0]                   
__________________________________________________________________________________________________
conv2d_4 (Conv2D)               (None, 56, 56, 144)  3456        batch_normalization_5[0][0]      
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 56, 56, 144)  576         conv2d_4[0][0]                   
__________________________________________________________________________________________________
re_lu_4 (ReLU)                  (None, 56, 56, 144)  0           batch_normalization_6[0][0]      
__________________________________________________________________________________________________
depthwise_conv2d_2 (DepthwiseCo (None, 56, 56, 144)  1296        re_lu_4[0][0]                    
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 56, 56, 144)  576         depthwise_conv2d_2[0][0]         
__________________________________________________________________________________________________
re_lu_5 (ReLU)                  (None, 56, 56, 144)  0           batch_normalization_7[0][0]      
__________________________________________________________________________________________________
conv2d_5 (Conv2D)               (None, 56, 56, 24)   3456        re_lu_5[0][0]                    
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 56, 56, 24)   96          conv2d_5[0][0]                   
__________________________________________________________________________________________________
conv2d_6 (Conv2D)               (None, 56, 56, 72)   1728        batch_normalization_8[0][0]      
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 56, 56, 72)   288         conv2d_6[0][0]                   
__________________________________________________________________________________________________
re_lu_6 (ReLU)                  (None, 56, 56, 72)   0           batch_normalization_9[0][0]      
__________________________________________________________________________________________________
depthwise_conv2d_3 (DepthwiseCo (None, 28, 28, 72)   1800        re_lu_6[0][0]                    
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 28, 28, 72)   288         depthwise_conv2d_3[0][0]         
__________________________________________________________________________________________________
re_lu_7 (ReLU)                  (None, 28, 28, 72)   0           batch_normalization_10[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d (Globa (None, 72)           0           re_lu_7[0][0]                    
__________________________________________________________________________________________________
dense (Dense)                   (None, 36)           2628        global_average_pooling2d[0][0]   
__________________________________________________________________________________________________
re_lu_8 (ReLU)                  (None, 36)           0           dense[0][0]                      
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 72)           2664        re_lu_8[0][0]                    
__________________________________________________________________________________________________
tf.math.sigmoid (TFOpLambda)    (None, 72)           0           dense_1[0][0]                    
__________________________________________________________________________________________________
reshape (Reshape)               (None, 1, 1, 72)     0           tf.math.sigmoid[0][0]            
__________________________________________________________________________________________________
tf.math.multiply (TFOpLambda)   (None, 28, 28, 72)   0           re_lu_7[0][0]                    reshape[0][0]                    
__________________________________________________________________________________________________
conv2d_7 (Conv2D)               (None, 28, 28, 40)   2880        tf.math.multiply[0][0]           
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 28, 28, 40)   160         conv2d_7[0][0]                   
__________________________________________________________________________________________________
conv2d_8 (Conv2D)               (None, 28, 28, 120)  4800        batch_normalization_11[0][0]     
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 28, 28, 120)  480         conv2d_8[0][0]                   
__________________________________________________________________________________________________
re_lu_9 (ReLU)                  (None, 28, 28, 120)  0           batch_normalization_12[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_4 (DepthwiseCo (None, 28, 28, 120)  3000        re_lu_9[0][0]                    
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 28, 28, 120)  480         depthwise_conv2d_4[0][0]         
__________________________________________________________________________________________________
re_lu_10 (ReLU)                 (None, 28, 28, 120)  0           batch_normalization_13[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 120)          0           re_lu_10[0][0]                   
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 60)           7260        global_average_pooling2d_1[0][0] 
__________________________________________________________________________________________________
re_lu_11 (ReLU)                 (None, 60)           0           dense_2[0][0]                    
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 120)          7320        re_lu_11[0][0]                   
__________________________________________________________________________________________________
tf.math.sigmoid_1 (TFOpLambda)  (None, 120)          0           dense_3[0][0]                    
__________________________________________________________________________________________________
reshape_1 (Reshape)             (None, 1, 1, 120)    0           tf.math.sigmoid_1[0][0]          
__________________________________________________________________________________________________
tf.math.multiply_1 (TFOpLambda) (None, 28, 28, 120)  0           re_lu_10[0][0]                   reshape_1[0][0]                  
__________________________________________________________________________________________________
conv2d_9 (Conv2D)               (None, 28, 28, 40)   4800        tf.math.multiply_1[0][0]         
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 28, 28, 40)   160         conv2d_9[0][0]                   
__________________________________________________________________________________________________
conv2d_10 (Conv2D)              (None, 28, 28, 120)  4800        batch_normalization_14[0][0]     
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 28, 28, 120)  480         conv2d_10[0][0]                  
__________________________________________________________________________________________________
re_lu_12 (ReLU)                 (None, 28, 28, 120)  0           batch_normalization_15[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_5 (DepthwiseCo (None, 28, 28, 120)  3000        re_lu_12[0][0]                   
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 28, 28, 120)  480         depthwise_conv2d_5[0][0]         
__________________________________________________________________________________________________
re_lu_13 (ReLU)                 (None, 28, 28, 120)  0           batch_normalization_16[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_2 (Glo (None, 120)          0           re_lu_13[0][0]                   
__________________________________________________________________________________________________
dense_4 (Dense)                 (None, 60)           7260        global_average_pooling2d_2[0][0] 
__________________________________________________________________________________________________
re_lu_14 (ReLU)                 (None, 60)           0           dense_4[0][0]                    
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 120)          7320        re_lu_14[0][0]                   
__________________________________________________________________________________________________
tf.math.sigmoid_2 (TFOpLambda)  (None, 120)          0           dense_5[0][0]                    
__________________________________________________________________________________________________
reshape_2 (Reshape)             (None, 1, 1, 120)    0           tf.math.sigmoid_2[0][0]          
__________________________________________________________________________________________________
tf.math.multiply_2 (TFOpLambda) (None, 28, 28, 120)  0           re_lu_13[0][0]                   reshape_2[0][0]                  
__________________________________________________________________________________________________
conv2d_11 (Conv2D)              (None, 28, 28, 40)   4800        tf.math.multiply_2[0][0]         
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 28, 28, 40)   160         conv2d_11[0][0]                  
__________________________________________________________________________________________________
conv2d_12 (Conv2D)              (None, 28, 28, 240)  9600        batch_normalization_17[0][0]     
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 28, 28, 240)  960         conv2d_12[0][0]                  
__________________________________________________________________________________________________
re_lu_15 (ReLU)                 (None, 28, 28, 240)  0           batch_normalization_18[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_6 (DepthwiseCo (None, 14, 14, 240)  2160        re_lu_15[0][0]                   
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 14, 14, 240)  960         depthwise_conv2d_6[0][0]         
__________________________________________________________________________________________________
re_lu_16 (ReLU)                 (None, 14, 14, 240)  0           batch_normalization_19[0][0]     
__________________________________________________________________________________________________
conv2d_13 (Conv2D)              (None, 14, 14, 80)   19200       re_lu_16[0][0]                   
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 14, 14, 80)   320         conv2d_13[0][0]                  
__________________________________________________________________________________________________
conv2d_14 (Conv2D)              (None, 14, 14, 480)  38400       batch_normalization_20[0][0]     
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (None, 14, 14, 480)  1920        conv2d_14[0][0]                  
__________________________________________________________________________________________________
re_lu_17 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_21[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_7 (DepthwiseCo (None, 14, 14, 480)  4320        re_lu_17[0][0]                   
__________________________________________________________________________________________________
batch_normalization_22 (BatchNo (None, 14, 14, 480)  1920        depthwise_conv2d_7[0][0]         
__________________________________________________________________________________________________
re_lu_18 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_22[0][0]     
__________________________________________________________________________________________________
conv2d_15 (Conv2D)              (None, 14, 14, 80)   38400       re_lu_18[0][0]                   
__________________________________________________________________________________________________
batch_normalization_23 (BatchNo (None, 14, 14, 80)   320         conv2d_15[0][0]                  
__________________________________________________________________________________________________
conv2d_16 (Conv2D)              (None, 14, 14, 480)  38400       batch_normalization_23[0][0]     
__________________________________________________________________________________________________
batch_normalization_24 (BatchNo (None, 14, 14, 480)  1920        conv2d_16[0][0]                  
__________________________________________________________________________________________________
re_lu_19 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_24[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_8 (DepthwiseCo (None, 14, 14, 480)  4320        re_lu_19[0][0]                   
__________________________________________________________________________________________________
batch_normalization_25 (BatchNo (None, 14, 14, 480)  1920        depthwise_conv2d_8[0][0]         
__________________________________________________________________________________________________
re_lu_20 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_25[0][0]     
__________________________________________________________________________________________________
conv2d_17 (Conv2D)              (None, 14, 14, 80)   38400       re_lu_20[0][0]                   
__________________________________________________________________________________________________
batch_normalization_26 (BatchNo (None, 14, 14, 80)   320         conv2d_17[0][0]                  
__________________________________________________________________________________________________
conv2d_18 (Conv2D)              (None, 14, 14, 480)  38400       batch_normalization_26[0][0]     
__________________________________________________________________________________________________
batch_normalization_27 (BatchNo (None, 14, 14, 480)  1920        conv2d_18[0][0]                  
__________________________________________________________________________________________________
re_lu_21 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_27[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_9 (DepthwiseCo (None, 14, 14, 480)  4320        re_lu_21[0][0]                   
__________________________________________________________________________________________________
batch_normalization_28 (BatchNo (None, 14, 14, 480)  1920        depthwise_conv2d_9[0][0]         
__________________________________________________________________________________________________
re_lu_22 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_28[0][0]     
__________________________________________________________________________________________________
conv2d_19 (Conv2D)              (None, 14, 14, 80)   38400       re_lu_22[0][0]                   
__________________________________________________________________________________________________
batch_normalization_29 (BatchNo (None, 14, 14, 80)   320         conv2d_19[0][0]                  
__________________________________________________________________________________________________
conv2d_20 (Conv2D)              (None, 14, 14, 480)  38400       batch_normalization_29[0][0]     
__________________________________________________________________________________________________
batch_normalization_30 (BatchNo (None, 14, 14, 480)  1920        conv2d_20[0][0]                  
__________________________________________________________________________________________________
re_lu_23 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_30[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_10 (DepthwiseC (None, 14, 14, 480)  4320        re_lu_23[0][0]                   
__________________________________________________________________________________________________
batch_normalization_31 (BatchNo (None, 14, 14, 480)  1920        depthwise_conv2d_10[0][0]        
__________________________________________________________________________________________________
re_lu_24 (ReLU)                 (None, 14, 14, 480)  0           batch_normalization_31[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_3 (Glo (None, 480)          0           re_lu_24[0][0]                   
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 240)          115440      global_average_pooling2d_3[0][0] 
__________________________________________________________________________________________________
re_lu_25 (ReLU)                 (None, 240)          0           dense_6[0][0]                    
__________________________________________________________________________________________________
dense_7 (Dense)                 (None, 480)          115680      re_lu_25[0][0]                   
__________________________________________________________________________________________________
tf.math.sigmoid_3 (TFOpLambda)  (None, 480)          0           dense_7[0][0]                    
__________________________________________________________________________________________________
reshape_3 (Reshape)             (None, 1, 1, 480)    0           tf.math.sigmoid_3[0][0]          
__________________________________________________________________________________________________
tf.math.multiply_3 (TFOpLambda) (None, 14, 14, 480)  0           re_lu_24[0][0]                   reshape_3[0][0]                  
__________________________________________________________________________________________________
conv2d_21 (Conv2D)              (None, 14, 14, 112)  53760       tf.math.multiply_3[0][0]         
__________________________________________________________________________________________________
batch_normalization_32 (BatchNo (None, 14, 14, 112)  448         conv2d_21[0][0]                  
__________________________________________________________________________________________________
conv2d_22 (Conv2D)              (None, 14, 14, 672)  75264       batch_normalization_32[0][0]     
__________________________________________________________________________________________________
batch_normalization_33 (BatchNo (None, 14, 14, 672)  2688        conv2d_22[0][0]                  
__________________________________________________________________________________________________
re_lu_26 (ReLU)                 (None, 14, 14, 672)  0           batch_normalization_33[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_11 (DepthwiseC (None, 14, 14, 672)  6048        re_lu_26[0][0]                   
__________________________________________________________________________________________________
batch_normalization_34 (BatchNo (None, 14, 14, 672)  2688        depthwise_conv2d_11[0][0]        
__________________________________________________________________________________________________
re_lu_27 (ReLU)                 (None, 14, 14, 672)  0           batch_normalization_34[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_4 (Glo (None, 672)          0           re_lu_27[0][0]                   
__________________________________________________________________________________________________
dense_8 (Dense)                 (None, 336)          226128      global_average_pooling2d_4[0][0] 
__________________________________________________________________________________________________
re_lu_28 (ReLU)                 (None, 336)          0           dense_8[0][0]                    
__________________________________________________________________________________________________
dense_9 (Dense)                 (None, 672)          226464      re_lu_28[0][0]                   
__________________________________________________________________________________________________
tf.math.sigmoid_4 (TFOpLambda)  (None, 672)          0           dense_9[0][0]                    
__________________________________________________________________________________________________
reshape_4 (Reshape)             (None, 1, 1, 672)    0           tf.math.sigmoid_4[0][0]          
__________________________________________________________________________________________________
tf.math.multiply_4 (TFOpLambda) (None, 14, 14, 672)  0           re_lu_27[0][0]                   reshape_4[0][0]                  
__________________________________________________________________________________________________
conv2d_23 (Conv2D)              (None, 14, 14, 112)  75264       tf.math.multiply_4[0][0]         
__________________________________________________________________________________________________
batch_normalization_35 (BatchNo (None, 14, 14, 112)  448         conv2d_23[0][0]                  
__________________________________________________________________________________________________
conv2d_24 (Conv2D)              (None, 14, 14, 672)  75264       batch_normalization_35[0][0]     
__________________________________________________________________________________________________
batch_normalization_36 (BatchNo (None, 14, 14, 672)  2688        conv2d_24[0][0]                  
__________________________________________________________________________________________________
re_lu_29 (ReLU)                 (None, 14, 14, 672)  0           batch_normalization_36[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_12 (DepthwiseC (None, 7, 7, 672)    16800       re_lu_29[0][0]                   
__________________________________________________________________________________________________
batch_normalization_37 (BatchNo (None, 7, 7, 672)    2688        depthwise_conv2d_12[0][0]        
__________________________________________________________________________________________________
re_lu_30 (ReLU)                 (None, 7, 7, 672)    0           batch_normalization_37[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_5 (Glo (None, 672)          0           re_lu_30[0][0]                   
__________________________________________________________________________________________________
dense_10 (Dense)                (None, 336)          226128      global_average_pooling2d_5[0][0] 
__________________________________________________________________________________________________
re_lu_31 (ReLU)                 (None, 336)          0           dense_10[0][0]                   
__________________________________________________________________________________________________
dense_11 (Dense)                (None, 672)          226464      re_lu_31[0][0]                   
__________________________________________________________________________________________________
tf.math.sigmoid_5 (TFOpLambda)  (None, 672)          0           dense_11[0][0]                   
__________________________________________________________________________________________________
reshape_5 (Reshape)             (None, 1, 1, 672)    0           tf.math.sigmoid_5[0][0]          
__________________________________________________________________________________________________
tf.math.multiply_5 (TFOpLambda) (None, 7, 7, 672)    0           re_lu_30[0][0]                   reshape_5[0][0]                  
__________________________________________________________________________________________________
conv2d_25 (Conv2D)              (None, 7, 7, 160)    107520      tf.math.multiply_5[0][0]         
__________________________________________________________________________________________________
batch_normalization_38 (BatchNo (None, 7, 7, 160)    640         conv2d_25[0][0]                  
__________________________________________________________________________________________________
conv2d_26 (Conv2D)              (None, 7, 7, 960)    153600      batch_normalization_38[0][0]     
__________________________________________________________________________________________________
batch_normalization_39 (BatchNo (None, 7, 7, 960)    3840        conv2d_26[0][0]                  
__________________________________________________________________________________________________
re_lu_32 (ReLU)                 (None, 7, 7, 960)    0           batch_normalization_39[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_13 (DepthwiseC (None, 7, 7, 960)    24000       re_lu_32[0][0]                   
__________________________________________________________________________________________________
batch_normalization_40 (BatchNo (None, 7, 7, 960)    3840        depthwise_conv2d_13[0][0]        
__________________________________________________________________________________________________
re_lu_33 (ReLU)                 (None, 7, 7, 960)    0           batch_normalization_40[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_6 (Glo (None, 960)          0           re_lu_33[0][0]                   
__________________________________________________________________________________________________
dense_12 (Dense)                (None, 480)          461280      global_average_pooling2d_6[0][0] 
__________________________________________________________________________________________________
re_lu_34 (ReLU)                 (None, 480)          0           dense_12[0][0]                   
__________________________________________________________________________________________________
dense_13 (Dense)                (None, 960)          461760      re_lu_34[0][0]                   
__________________________________________________________________________________________________
tf.math.sigmoid_6 (TFOpLambda)  (None, 960)          0           dense_13[0][0]                   
__________________________________________________________________________________________________
reshape_6 (Reshape)             (None, 1, 1, 960)    0           tf.math.sigmoid_6[0][0]          
__________________________________________________________________________________________________
tf.math.multiply_6 (TFOpLambda) (None, 7, 7, 960)    0           re_lu_33[0][0]                   reshape_6[0][0]                  
__________________________________________________________________________________________________
conv2d_27 (Conv2D)              (None, 7, 7, 160)    153600      tf.math.multiply_6[0][0]         
__________________________________________________________________________________________________
batch_normalization_41 (BatchNo (None, 7, 7, 160)    640         conv2d_27[0][0]                  
__________________________________________________________________________________________________
conv2d_28 (Conv2D)              (None, 7, 7, 960)    153600      batch_normalization_41[0][0]     
__________________________________________________________________________________________________
batch_normalization_42 (BatchNo (None, 7, 7, 960)    3840        conv2d_28[0][0]                  
__________________________________________________________________________________________________
re_lu_35 (ReLU)                 (None, 7, 7, 960)    0           batch_normalization_42[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_14 (DepthwiseC (None, 7, 7, 960)    24000       re_lu_35[0][0]                   
__________________________________________________________________________________________________
batch_normalization_43 (BatchNo (None, 7, 7, 960)    3840        depthwise_conv2d_14[0][0]        
__________________________________________________________________________________________________
re_lu_36 (ReLU)                 (None, 7, 7, 960)    0           batch_normalization_43[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_7 (Glo (None, 960)          0           re_lu_36[0][0]                   
__________________________________________________________________________________________________
dense_14 (Dense)                (None, 480)          461280      global_average_pooling2d_7[0][0] 
__________________________________________________________________________________________________
re_lu_37 (ReLU)                 (None, 480)          0           dense_14[0][0]                   
__________________________________________________________________________________________________
dense_15 (Dense)                (None, 960)          461760      re_lu_37[0][0]                   
__________________________________________________________________________________________________
tf.math.sigmoid_7 (TFOpLambda)  (None, 960)          0           dense_15[0][0]                   
__________________________________________________________________________________________________
reshape_7 (Reshape)             (None, 1, 1, 960)    0           tf.math.sigmoid_7[0][0]          
__________________________________________________________________________________________________
tf.math.multiply_7 (TFOpLambda) (None, 7, 7, 960)    0           re_lu_36[0][0]                   reshape_7[0][0]                  
__________________________________________________________________________________________________
conv2d_29 (Conv2D)              (None, 7, 7, 160)    153600      tf.math.multiply_7[0][0]         
__________________________________________________________________________________________________
batch_normalization_44 (BatchNo (None, 7, 7, 160)    640         conv2d_29[0][0]                  
__________________________________________________________________________________________________
conv2d_30 (Conv2D)              (None, 7, 7, 960)    153600      batch_normalization_44[0][0]     
__________________________________________________________________________________________________
batch_normalization_45 (BatchNo (None, 7, 7, 960)    3840        conv2d_30[0][0]                  
__________________________________________________________________________________________________
re_lu_38 (ReLU)                 (None, 7, 7, 960)    0           batch_normalization_45[0][0]     
__________________________________________________________________________________________________
depthwise_conv2d_15 (DepthwiseC (None, 7, 7, 960)    8640        re_lu_38[0][0]                   
__________________________________________________________________________________________________
batch_normalization_46 (BatchNo (None, 7, 7, 960)    3840        depthwise_conv2d_15[0][0]        
__________________________________________________________________________________________________
re_lu_39 (ReLU)                 (None, 7, 7, 960)    0           batch_normalization_46[0][0]     
__________________________________________________________________________________________________
conv2d_31 (Conv2D)              (None, 7, 7, 320)    307200      re_lu_39[0][0]                   
__________________________________________________________________________________________________
batch_normalization_47 (BatchNo (None, 7, 7, 320)    1280        conv2d_31[0][0]                  
__________________________________________________________________________________________________
conv2d_32 (Conv2D)              (None, 7, 7, 1280)   409600      batch_normalization_47[0][0]     
__________________________________________________________________________________________________
batch_normalization_48 (BatchNo (None, 7, 7, 1280)   5120        conv2d_32[0][0]                  
__________________________________________________________________________________________________
re_lu_40 (ReLU)                 (None, 7, 7, 1280)   0           batch_normalization_48[0][0]     
__________________________________________________________________________________________________
global_average_pooling2d_8 (Glo (None, 1280)         0           re_lu_40[0][0]                   
__________________________________________________________________________________________________
dense_16 (Dense)                (None, 1000)         1281000     global_average_pooling2d_8[0][0] 
==================================================================================================
Total params: 6,679,396
Trainable params: 6,645,908
Non-trainable params: 33,488
__________________________________________________________________________________________________
