Section 16: Using the Function Wrapper Library tf.contrib.layers


Contents

  • 1. The functions in tf.contrib.layers
    • 1. tf.contrib.layers.conv2d()
    • 2. tf.contrib.layers.max_pool2d()
    • 3. tf.contrib.layers.avg_pool2d()
    • 4. tf.contrib.layers.fully_connected()
  • 2. Rewriting the CIFAR-10 classifier



This section introduces a high-level wrapper library that ships with TensorFlow. It provides high-level wrappers for many of the functions covered in earlier sections, and developing with it can noticeably improve productivity.

We will rewrite the program from Section 13: the convolutions will use tf.contrib.layers.conv2d(), the pooling will use tf.contrib.layers.max_pool2d() and tf.contrib.layers.avg_pool2d(), and the fully connected layer will use tf.contrib.layers.fully_connected().


1. The functions in tf.contrib.layers

1. tf.contrib.layers.conv2d() is defined as follows (conv2d is backed by the generic convolution function shown here):

def convolution(inputs,
                num_outputs,
                kernel_size,
                stride=1,
                padding='SAME',
                data_format=None,
                rate=1,
                activation_fn=nn.relu,
                normalizer_fn=None,
                normalizer_params=None,
                weights_initializer=initializers.xavier_initializer(),
                weights_regularizer=None,
                biases_initializer=init_ops.zeros_initializer(),
                biases_regularizer=None,
                reuse=None,
                variables_collections=None,
                outputs_collections=None,
                trainable=True,
                scope=None):

The commonly used parameters are described below; a short usage sketch follows the list.

  • inputs: the input tensor, with shape [batch_size, height, width, channels].
  • num_outputs: the number of output channels. The number of input channels does not need to be specified, because the function infers it from the shape of inputs.
  • kernel_size: the size of the convolution kernel, without batch or channel dimensions. [5,5] means a 5x5 kernel; if height and width are equal, a single integer such as 5 also works.
  • stride: the stride, by default the same in both dimensions. Convolutions usually use a stride of 1, which is also the default. Different strides per dimension can be given as a list such as [1,2].
  • padding: the padding method, 'SAME' or 'VALID'.
  • activation_fn: the activation function. The default is ReLU; it can also be set to None.
  • weights_initializer: the weight initializer, defaulting to initializers.xavier_initializer().
  • weights_regularizer: optional weight regularizer; a regularization function can be supplied here.
  • biases_initializer: the bias initializer, defaulting to init_ops.zeros_initializer().
  • biases_regularizer: optional bias regularizer; a regularization function can be supplied here.
  • trainable: whether the variables are trainable. For a training node this must be True, which is the default. When fine-tuning a network you sometimes need to freeze a layer's parameters; set it to False in that case.
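
To make these arguments concrete, here is a minimal usage sketch (the placeholder shape and filter count are illustrative, chosen for this example rather than taken from the program below):

import tensorflow as tf

# A hypothetical batch of 32x32 RGB images.
images = tf.placeholder(tf.float32, shape=[None, 32, 32, 3])
# 16 filters of size 5x5, stride 1; weights and biases are created and
# initialized internally, and ReLU is applied by default.
conv = tf.contrib.layers.conv2d(inputs=images, num_outputs=16, kernel_size=[5, 5],
                                stride=1, padding='SAME')
print(conv.get_shape().as_list())   # [None, 32, 32, 16]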

2. tf.contrib.layers.max_pool2d() is defined as follows:

def max_pool2d(inputs,
               kernel_size,
               stride=2,
               padding='VALID',
               data_format=DATA_FORMAT_NHWC,
               outputs_collections=None,
               scope=None):

The parameters are described below; a pooling sketch continuing the example above follows the list.

  • inputs: A 4-D tensor of shape `[batch_size, height, width, channels]` if `data_format` is `NHWC`, and `[batch_size, channels, height, width]` if `data_format` is `NCHW`.
  • kernel_size: A list of length 2: [kernel_height, kernel_width] of the pooling kernel over which the op is computed. Can be an int if both values are the same.
  • stride: A list of length 2: [stride_height, stride_width]. Can be an int if both strides are the same. Note that presently both strides must have the same value.
  • padding: The padding method, either 'VALID' or 'SAME'.
  • data_format: A string. `NHWC` (default) and `NCHW` are supported.
  • outputs_collections: The collections to which the outputs are added.
  • scope: Optional scope for name_scope.
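
Continuing the sketch above, a 2x2 max pooling with stride 2 halves the spatial dimensions. With the default 'VALID' padding, the output size per dimension is floor((input - kernel_size) / stride) + 1:

# 2x2 max pooling with stride 2 on the conv output from the previous sketch.
pool = tf.contrib.layers.max_pool2d(inputs=conv, kernel_size=[2, 2], stride=2)
print(pool.get_shape().as_list())   # [None, 16, 16, 16]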

3. tf.contrib.layers.avg_pool2d() is defined as follows:


def avg_pool2d(inputs,
               kernel_size,
               stride=2,
               padding='VALID',
               data_format=DATA_FORMAT_NHWC,
               outputs_collections=None,
               scope=None):

Its parameters are identical to those of max_pool2d() described above; a sketch of its most common use follows.
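
One practical point: setting kernel_size equal to the spatial size of the input turns avg_pool2d into global average pooling, which is exactly how it is used in the CIFAR-10 program below. A sketch continuing the running example:

# Global average pooling: the kernel covers the whole 16x16 feature map.
gap = tf.contrib.layers.avg_pool2d(inputs=pool, kernel_size=16, stride=16)
print(gap.get_shape().as_list())    # [None, 1, 1, 16]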

4. tf.contrib.layers.fully_connected() is defined as follows:

def fully_connected(inputs,
                    num_outputs,
                    activation_fn=nn.relu,
                    normalizer_fn=None,
                    normalizer_params=None,
                    weights_initializer=initializers.xavier_initializer(),
                    weights_regularizer=None,
                    biases_initializer=init_ops.zeros_initializer(),
                    biases_regularizer=None,
                    reuse=None,
                    variables_collections=None,
                    outputs_collections=None,
                    trainable=True,
                    scope=None):

The parameters are described below, with a short sketch after the list.

  • inputs: A tensor of at least rank 2 with a static value for the last dimension, e.g. `[batch_size, depth]` or `[None, None, None, channels]`.
  • num_outputs: Integer or long, the number of output units in the layer.
  • activation_fn: Activation function. The default value is a ReLU function. Explicitly set it to None to skip it and maintain a linear activation.
  • normalizer_fn: Normalization function to use instead of `biases`. If `normalizer_fn` is provided then `biases_initializer` and `biases_regularizer` are ignored and `biases` are not created nor added. Defaults to None for no normalizer function.
  • normalizer_params: Normalization function parameters.
  • weights_initializer: An initializer for the weights.
  • weights_regularizer: Optional regularizer for the weights.
  • biases_initializer: An initializer for the biases. If None, biases are skipped.
  • biases_regularizer: Optional regularizer for the biases.
  • reuse: Whether or not the layer and its variables should be reused. To be able to reuse the layer, scope must be given.
  • variables_collections: Optional list of collections for all the variables, or a dictionary containing a different list of collections per variable.
  • outputs_collections: Collection to add the outputs to.
  • trainable: If `True`, also add variables to the graph collection `GraphKeys.TRAINABLE_VARIABLES` (see tf.Variable). When fine-tuning a network you sometimes need to freeze a layer's parameters; set it to False in that case.
  • scope: Optional scope for variable_scope.
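
A short sketch, continuing the running example (the features are flattened first, since fully_connected expects a rank-2 tensor here):

# Flatten the pooled features and map them to 10 class scores.
flat = tf.reshape(gap, [-1, 16])
logits = tf.contrib.layers.fully_connected(inputs=flat, num_outputs=10,
                                           activation_fn=None)   # linear output, no activation
print(logits.get_shape().as_list())  # [None, 10]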



2. Rewriting the CIFAR-10 classifier

The code is as follows:

# -*- coding: utf-8 -*-
"""
Created on Thu May  3 12:29:16 2018

@author: zy
"""

'''
Build a convolutional neural network with a fully connected layer and classify the CIFAR-10 dataset:
1. Two 'SAME' convolutions with 5x5 filters, each followed by a 2x2 max-pooling layer with stride 2.
2. Global average pooling over the 64 output feature maps, giving 64 features.
3. A fully connected layer with a softmax activation for classification.
'''

import cifar10_input
import tensorflow as tf
import numpy as np


def print_op_shape(t):
    '''Print the shape of an op node.'''
    print(t.op.name, '', t.get_shape().as_list())


'''
1 Load the dataset
'''
batch_size = 128
learning_rate = 1e-4
training_step = 15000
display_step = 200

# Dataset directory
data_dir = './cifar10_data/cifar-10-batches-bin'
print('begin')
# Get the training data
images_train, labels_train = cifar10_input.inputs(eval_data=False, data_dir=data_dir, batch_size=batch_size)
print('begin data')

'''
2 Define the network structure
'''
# Placeholders
input_x = tf.placeholder(dtype=tf.float32, shape=[None, 24, 24, 3])   # images are 24x24x3
input_y = tf.placeholder(dtype=tf.float32, shape=[None, 10])          # classes 0-9

x_image = tf.reshape(input_x, [batch_size, 24, 24, 3])

# 1. Convolution -> pooling
h_conv1 = tf.contrib.layers.conv2d(inputs=x_image, num_outputs=64, kernel_size=5, stride=1,
                                   padding='SAME', activation_fn=tf.nn.relu)        # output [-1,24,24,64]
print_op_shape(h_conv1)
h_pool1 = tf.contrib.layers.max_pool2d(inputs=h_conv1, kernel_size=2, stride=2,
                                       padding='SAME')                              # output [-1,12,12,64]
print_op_shape(h_pool1)

# 2. Convolution -> pooling
h_conv2 = tf.contrib.layers.conv2d(inputs=h_pool1, num_outputs=64, kernel_size=[5, 5], stride=[1, 1],
                                   padding='SAME', activation_fn=tf.nn.relu)        # output [-1,12,12,64]
print_op_shape(h_conv2)
h_pool2 = tf.contrib.layers.max_pool2d(inputs=h_conv2, kernel_size=[2, 2], stride=[2, 2],
                                       padding='SAME')                              # output [-1,6,6,64]
print_op_shape(h_pool2)

# 3. Fully connected layer (after global average pooling)
nt_hpool2 = tf.contrib.layers.avg_pool2d(inputs=h_pool2, kernel_size=6, stride=6,
                                         padding='SAME')                            # output [-1,1,1,64]
print_op_shape(nt_hpool2)
nt_hpool2_flat = tf.reshape(nt_hpool2, [-1, 64])
y_conv = tf.contrib.layers.fully_connected(inputs=nt_hpool2_flat, num_outputs=10,
                                           activation_fn=tf.nn.softmax)
print_op_shape(y_conv)

'''
3 Define the solver
'''
# Softmax cross-entropy cost function
cost = tf.reduce_mean(-tf.reduce_sum(input_y * tf.log(y_conv), axis=1))
# Solver
train = tf.train.AdamOptimizer(learning_rate).minimize(cost)
# Accuracy
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(input_y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, dtype=tf.float32))

'''
4 Start training
'''
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# Start all queue runner threads in the graph. tf.train.start_queue_runners fills the
# filename queue; otherwise the read ops would block until the queue has entries.
tf.train.start_queue_runners(sess=sess)

for step in range(training_step):
    # Fetch a batch of training data
    image_batch, label_batch = sess.run([images_train, labels_train])
    # One-hot encode the labels
    label_b = np.eye(10, dtype=np.float32)[label_batch]
    # Run one training step
    train.run(feed_dict={input_x: image_batch, input_y: label_b}, session=sess)
    if step % display_step == 0:
        train_accuracy = accuracy.eval(feed_dict={input_x: image_batch, input_y: label_b},
                                       session=sess)
        print('Step {0} training accuracy {1}'.format(step, train_accuracy))
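
A note on the loss above: computing the cross entropy by hand as -sum(y * log(y_conv)) can hit log(0) and produce NaN once the softmax saturates. A common, more numerically stable variant (an alternative sketch, not part of the original program) is to let fully_connected emit raw logits and use tf.nn.softmax_cross_entropy_with_logits:

# Alternative loss, assuming the same nt_hpool2_flat and input_y as above:
logits = tf.contrib.layers.fully_connected(inputs=nt_hpool2_flat, num_outputs=10,
                                           activation_fn=None)   # linear output, no softmax here
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=input_y, logits=logits))
train = tf.train.AdamOptimizer(learning_rate).minimize(cost)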

