
Loading VGG in TensorFlow and Visualizing Every Layer


1. Introduction

In the 2014 ILSVRC competition, the VGG network took first place in the localization task and second place in the classification task. VGG is very deep, typically 16-19 weight layers, and training it from scratch costs a great deal of time and compute. This article therefore loads the pretrained VGG19 model data directly, so the network can be applied to your own task much more quickly.

While loading the model data, the article also visualizes the feature map each layer produces as an image propagates forward through the network, which gives a more direct view of what happens during forward propagation.

Environment: Spyder, Python 3.5, TensorFlow 1.2.1.
The pretrained model file is imagenet-vgg-verydeep-19.mat, which can be downloaded online.
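Once downloaded, it is worth a quick sanity check that the file contains the fields the code below relies on ('layers' and 'normalization'). A minimal sketch, assuming the file sits next to your script (the path here is only an example):

import numpy as np
import scipy.io

# Example path only; point this at wherever you saved the download.
VGG_PATH = "imagenet-vgg-verydeep-19.mat"

data = scipy.io.loadmat(VGG_PATH)
# Fields used later in this article: the per-layer parameters and the mean image.
print(sorted(k for k in data if not k.startswith('__')))  # expect 'layers' and 'normalization' among them
print(data['layers'].shape)        # one row per MatConvNet layer (convs, relus, pools, fc layers)
mean = data['normalization'][0][0][0]
print(np.mean(mean, axis=(0, 1)))  # per-channel mean pixel subtracted before the forward pass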

2. VGG19 Model Structure

The per-layer structure of the model is shown in the following figure. (Figure: VGG19 layer configuration; image not reproduced here.)
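Since the figure is not reproduced here, the convolutional part of VGG19, which is all that the code in section 3 builds, can be summarized as follows. The block and channel breakdown is the standard VGG19 configuration; the name VGG19_BLOCKS is just for illustration:

# Standard VGG19 convolutional configuration. Each block is a run of 3x3
# convolutions followed by a 2x2 max pool that halves the spatial resolution.
VGG19_BLOCKS = [
    # (block, number of 3x3 convs, output channels)
    ("block1", 2, 64),    # conv1_1, conv1_2            -> pool1
    ("block2", 2, 128),   # conv2_1, conv2_2            -> pool2
    ("block3", 4, 256),   # conv3_1 ... conv3_4         -> pool3
    ("block4", 4, 512),   # conv4_1 ... conv4_4         -> pool4
    ("block5", 4, 512),   # conv5_1 ... conv5_4 (the code below stops at relu5_4)
]
# The full classifier then adds pool5 and three fully connected layers
# (4096, 4096, 1000), which this article does not load.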

3. Code

#加載VGG19模型并可視化一張圖片前向傳播的過程中每一層的輸出#引入包import tensorflow as tfimport numpy as npimport matplotlib.pyplot as pltimport scipy.ioimport scipy.misc#定義一些函數#卷積def _conv_layer(input, weights, bias):conv = tf.nn.conv2d(input, tf.constant(weights), strides=(1, 1, 1, 1),padding='SAME')return tf.nn.bias_add(conv, bias)#池化def _pool_layer(input):return tf.nn.max_pool(input, ksize=(1, 2, 2, 1), strides=(1, 2, 2, 1),padding='SAME')#減像素均值操作def preprocess(image, mean_pixel):return image - mean_pixel#加像素均值操作def unprocess(image, mean_pixel):return image + mean_pixel#讀def imread(path):return scipy.misc.imread(path).astype(np.float)#保存def imsave(path, img):img = np.clip(img, 0, 255).astype(np.uint8)scipy.misc.imsave(path, img)print ("Functions for VGG ready")#定義VGG的網絡結構,用來存儲網絡的權重和偏置參數def net(data_path, input_image):#拿到每一層對應的參數layers = ('conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1','conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2','conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3','relu3_3', 'conv3_4', 'relu3_4', 'pool3','conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3','relu4_3', 'conv4_4', 'relu4_4', 'pool4','conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3','relu5_3', 'conv5_4', 'relu5_4')data = scipy.io.loadmat(data_path)#原網絡在訓練的過程中,對每張圖片三通道都執行了減均值的操作,這里也要減去均值mean = data['normalization'][0][0][0]mean_pixel = np.mean(mean, axis=(0, 1))#print(mean_pixel)#取到權重參數W和b,這里運氣好的話,可以查到VGG模型中每層的參數含義,查不到的#話可以打印出weights,然后打印每一層的shape,推出其中每一層代表的含義weights = data['layers'][0]#print(weights)net = {}current = input_image#取到w和bfor i, name in enumerate(layers):#:4的含義是只看每一層的前三個字母,從而進行判斷kind = name[:4]if kind == 'conv':kernels, bias = weights[i][0][0][0][0]# matconvnet: weights are [width, height, in_channels, out_channels]\n",# tensorflow: weights are [height, width, in_channels, out_channels]\n",#這里width和height是顛倒的,所以要做一次轉置運算kernels = np.transpose(kernels, (1, 0, 2, 3))#將bias轉換為一個維度bias = bias.reshape(-1)current = _conv_layer(current, kernels, bias)elif kind == 'relu':current = tf.nn.relu(current)elif kind == 'pool':current = _pool_layer(current)net[name] = currentassert len(net) == len(layers)return net, mean_pixel, layersprint ("Network for VGG ready")#cwd = os.getcwd()#這里用的是絕對路徑VGG_PATH = "F:/mnist/imagenet-vgg-verydeep-19.mat"#需要可視化的圖片路徑,這里是一只小貓IMG_PATH = "D:/VS2015Program/cat.jpg"input_image = imread(IMG_PATH)#獲取圖像shapeshape = (1,input_image.shape[0],input_image.shape[1],input_image.shape[2]) #開始會話with tf.Session() as sess:image = tf.placeholder('float', shape=shape)#調用net函數nets, mean_pixel, all_layers = net(VGG_PATH, image)#減均值操作(由于VGG網絡圖片傳入前都做了減均值操作,所以這里也用相同的預處理input_image_pre = np.array([preprocess(input_image, mean_pixel)])layers = all_layers # For all layers \n",# layers = ('relu2_1', 'relu3_1', 'relu4_1')\n",for i, layer in enumerate(layers):print ("[%d/%d] %s" % (i+1, len(layers), layer))features = nets[layer].eval(feed_dict={image: input_image_pre})print (" Type of 'features' is ", type(features))print (" Shape of 'features' is %s" % (features.shape,))# Plot response \n",#畫出每一層if 1:plt.figure(i+1, figsize=(10, 5))plt.matshow(features[0, :, :, 0], cmap=plt.cm.gray, fignum=i+1)plt.title("" + layer)plt.colorbar()plt.show()

4. Program Output

1. The output of print(weights): (screenshot of the raw layer array; not reproduced here)
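Since that screenshot is not reproduced here, the same information can be obtained in text form by printing the kernel and bias shapes of every conv layer. This is only a sketch; it reuses the model path and the exact weights[i][0][0][0][0] indexing from the script above:

import scipy.io

# Same path as in the script above.
VGG_PATH = "F:/mnist/imagenet-vgg-verydeep-19.mat"

layer_names = (
    'conv1_1', 'relu1_1', 'conv1_2', 'relu1_2', 'pool1',
    'conv2_1', 'relu2_1', 'conv2_2', 'relu2_2', 'pool2',
    'conv3_1', 'relu3_1', 'conv3_2', 'relu3_2', 'conv3_3', 'relu3_3', 'conv3_4', 'relu3_4', 'pool3',
    'conv4_1', 'relu4_1', 'conv4_2', 'relu4_2', 'conv4_3', 'relu4_3', 'conv4_4', 'relu4_4', 'pool4',
    'conv5_1', 'relu5_1', 'conv5_2', 'relu5_2', 'conv5_3', 'relu5_3', 'conv5_4', 'relu5_4')

data = scipy.io.loadmat(VGG_PATH)
weights = data['layers'][0]
for i, name in enumerate(layer_names):
    if name[:4] == 'conv':
        # Same indexing as in net() above.
        kernels, bias = weights[i][0][0][0][0]
        print('%-8s kernels %-20s bias %s' % (name, kernels.shape, bias.shape))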

2. Final result of running the program: (screenshot not reproduced here)
There are too many intermediate layers to show them all, so only the visualizations of the last two layers were included. (Screenshots not reproduced here.)

Summary

That covers loading the pretrained VGG19 model in TensorFlow and visualizing the feature maps produced at every layer; hopefully it serves as a useful reference for applying the pretrained network to your own tasks.
