How to Visualize Convolutional Layers with TensorFlow
In deep learning, visualizing a convolutional layer helps us understand how it works and how training is progressing, and there is more than one way to do it. The simplest is to output the convolution kernels and the post-convolution filter channels directly as images. Other approaches try to discover what a convolutional layer actually "sees" by means of deconvolution (transposed convolution).
Even for the simplest approach, directly outputting the convolutional layer in TensorFlow, the explanations found online are of uneven quality. Today David 9 wants to show you a method that actually runs, so that you won't be misled.
Without further ado, here is the simplest method.
Suppose you have a convolutional layer; we will use TensorFlow's bundled cifar-10 training model as the example:
with tf.variable_scope('conv1') as scope:
    kernel = _variable_with_weight_decay('weights',
                                         shape=[5, 5, 3, 64],
                                         stddev=5e-2,
                                         wd=0.0)
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    pre_activation = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(pre_activation, name=scope.name)
    _activation_summary(conv1)
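(The two helpers used above, _variable_with_weight_decay and _variable_on_cpu, come from the cifar10.py model in the TensorFlow tutorial code; roughly, they look like the following sketch, simplified from that tutorial rather than quoted verbatim:)

    def _variable_on_cpu(name, shape, initializer):
        # create a variable pinned to CPU memory
        with tf.device('/cpu:0'):
            var = tf.get_variable(name, shape, initializer=initializer,
                                  dtype=tf.float32)
        return var

    def _variable_with_weight_decay(name, shape, stddev, wd):
        # truncated-normal initialization plus optional L2 weight decay
        var = _variable_on_cpu(
            name, shape,
            tf.truncated_normal_initializer(stddev=stddev, dtype=tf.float32))
        if wd is not None:
            weight_decay = tf.multiply(tf.nn.l2_loss(var), wd, name='weight_loss')
            tf.add_to_collection('losses', weight_decay)
        return var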
If all goes as expected, you already have a conv1 definition like the one above: the TensorFlow graph definition of the first convolutional layer. Clearly the conv1 object is the layer's activation output, and all we need to do is visualize that output directly. Add the following code inside this scope:
with tf.variable_scope('visualization'):
    # scale weights to [0, 1]; type is still float
    x_min = tf.reduce_min(kernel)
    x_max = tf.reduce_max(kernel)
    kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)
    # to tf.summary.image format [batch_size, height, width, channels]
    kernel_transposed = tf.transpose(kernel_0_to_1, [3, 0, 1, 2])
    # this will display the first 3 of the 64 filters in conv1
    tf.summary.image('conv1/filters', kernel_transposed, max_outputs=3)
    # first image of the batch, first 16 output channels
    layer1_image1 = conv1[0:1, :, :, 0:16]
    layer1_image1 = tf.transpose(layer1_image1, perm=[3, 1, 2, 0])
    tf.summary.image("filtered_images_layer1", layer1_image1, max_outputs=16)
So the whole scope becomes:
with tf.variable_scope('conv1') as scope:
    kernel = _variable_with_weight_decay('weights',
                                         shape=[5, 5, 3, 64],
                                         stddev=5e-2,
                                         wd=0.0)
    conv = tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME')
    biases = _variable_on_cpu('biases', [64], tf.constant_initializer(0.0))
    pre_activation = tf.nn.bias_add(conv, biases)
    conv1 = tf.nn.relu(pre_activation, name=scope.name)
    _activation_summary(conv1)

    with tf.variable_scope('visualization'):
        # scale weights to [0, 1]; type is still float
        x_min = tf.reduce_min(kernel)
        x_max = tf.reduce_max(kernel)
        kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)
        # to tf.summary.image format [batch_size, height, width, channels]
        kernel_transposed = tf.transpose(kernel_0_to_1, [3, 0, 1, 2])
        # this will display the first 3 of the 64 filters in conv1
        tf.summary.image('conv1/filters', kernel_transposed, max_outputs=3)
        # first image of the batch, first 16 output channels
        layer1_image1 = conv1[0:1, :, :, 0:16]
        layer1_image1 = tf.transpose(layer1_image1, perm=[3, 1, 2, 0])
        tf.summary.image("filtered_images_layer1", layer1_image1, max_outputs=16)
The added code makes TensorBoard display 3 of the convolution kernels and 16 of the post-convolution output channels. Note that the kernel tensor has shape [5, 5, 3, 64]; transposing with perm [3, 0, 1, 2] turns it into [64, 5, 5, 3], that is, 64 tiny 5×5 RGB images, of which the first 3 are shown.
Worth explaining here is the tf.transpose() method, which permutes tensor dimensions.
tf.transpose(layer1_image1, perm=[3,1,2,0])
This line swaps dimension 0 and dimension 3, because the image output function
tf.summary.image()
expects its input in the format (batch size, height, width, color channels), whereas the convolution output we just computed has the format (batch size, height, width, convolution channels). After the swap, the single batch entry becomes the color dimension (a single channel, so grayscale) and the 16 convolution channels become the batch, so each channel is rendered as its own grayscale image.
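A quick shape check makes the swap concrete (a minimal sketch; the 24×24 spatial size is just an illustrative stand-in for conv1's actual output size):

    import tensorflow as tf

    # hypothetical conv1 activations: 1 image, 24x24 spatial, 16 channels
    layer1_image1 = tf.zeros([1, 24, 24, 16])
    swapped = tf.transpose(layer1_image1, perm=[3, 1, 2, 0])
    print(swapped.shape)  # (16, 24, 24, 1): 16 single-channel grayscale images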
In short, once the visualization scope above is added, it runs live. Personally tested and working. Sample output looks like this:
[sample TensorBoard output image]
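The snippets above only define the summary ops; to actually see the images you still need to write the summaries out and point TensorBoard at the log directory. Here is a minimal, self-contained sketch (TF 1.x style; the toy input, the variable names, and the /tmp/conv_viz path are made up for illustration, not part of the cifar-10 script):

    import tensorflow as tf

    # toy stand-ins so the snippet runs on its own; in the cifar-10 script
    # these come from the real input pipeline and the conv1 definition
    images = tf.random_normal([4, 24, 24, 3])
    kernel = tf.get_variable('weights', [5, 5, 3, 64],
                             initializer=tf.truncated_normal_initializer(stddev=5e-2))
    conv1 = tf.nn.relu(tf.nn.conv2d(images, kernel, [1, 1, 1, 1], padding='SAME'))

    with tf.variable_scope('visualization'):
        x_min = tf.reduce_min(kernel)
        x_max = tf.reduce_max(kernel)
        kernel_0_to_1 = (kernel - x_min) / (x_max - x_min)
        tf.summary.image('conv1/filters',
                         tf.transpose(kernel_0_to_1, [3, 0, 1, 2]), max_outputs=3)
        layer1_image1 = tf.transpose(conv1[0:1, :, :, 0:16], perm=[3, 1, 2, 0])
        tf.summary.image('filtered_images_layer1', layer1_image1, max_outputs=16)

    merged = tf.summary.merge_all()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        writer = tf.summary.FileWriter('/tmp/conv_viz', sess.graph)
        writer.add_summary(sess.run(merged), global_step=0)
        writer.close()

    # then run: tensorboard --logdir=/tmp/conv_viz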
References:
http://stackoverflow.com/questions/35759220/how-to-visualize-learned-filters-on-tensorflow
https://github.com/tensorflow/tensorflow/issues/842
https://github.com/yosinski/deep-visualization-toolbox
https://github.com/tensorflow/tensorflow/issues/908
https://medium.com/@awjuliani/visualizing-neural-network-layer-activation-tensorflow-tutorial-d45f8bf7bbc4
https://gist.github.com/kukuruza/03731dc494603ceab0c5
source: http://nooverfit.com/wp/%E7%94%A8tensorflow%E5%8F%AF%E8%A7%86%E5%8C%96%E5%8D%B7%E7%A7%AF%E5%B1%82%E7%9A%84%E6%96%B9%E6%B3%95/#comment-900