TensorFlow Learning Notes: Implementing the Classic LeNet-5 Model (reposted)

發(fā)布時(shí)間:2025/3/20 编程问答 15 豆豆
生活随笔 收集整理的這篇文章主要介紹了 TensorFlow学习笔记-实现经典LeNet5模型(转载) 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

LeNet5模型是Yann LeCun教授于1998年提出來(lái)的,它是第一個(gè)成功應(yīng)用于數(shù)字識(shí)別問(wèn)題的卷積神經(jīng)網(wǎng)絡(luò)。在MNIST數(shù)據(jù)中,它的準(zhǔn)確率達(dá)到大約99.2%.
  通過(guò)TensorFlow實(shí)現(xiàn)的LeNet5模型,主要用到在說(shuō)使用變量管理,可以增加代碼可讀性、降低代碼冗余量,提高編程效率,更方便管理變量。我們將LeNet5模型分為三部分:
  1、網(wǎng)絡(luò)定義部分:這部分是訓(xùn)練和驗(yàn)證都需要的網(wǎng)絡(luò)結(jié)構(gòu)。
  2、訓(xùn)練部分:用于神經(jīng)網(wǎng)絡(luò)訓(xùn)練MNIST訓(xùn)練集。
  3、驗(yàn)證部分:驗(yàn)證訓(xùn)練模型的準(zhǔn)確率,在Tensorflow訓(xùn)練過(guò)程中,可以實(shí)時(shí)驗(yàn)證模型的正確率。
  將訓(xùn)練部分與驗(yàn)證部分分開(kāi)的好處在于,訓(xùn)練部分可以持續(xù)輸出訓(xùn)練好的模型,驗(yàn)證部分可以每隔一段時(shí)間驗(yàn)證模型的準(zhǔn)確率;如果模型不好,則需要及時(shí)調(diào)整網(wǎng)絡(luò)結(jié)構(gòu)的參數(shù)。

一、 網(wǎng)絡(luò)定義部分

import tensorflow as tf

INPUT_NODE = 784
OUTPUT_NODE = 10
IMAGE_SIZE = 28
NUM_CHANNEL = 1
NUM_LABEL = 10

# Layer 1 (convolution)
CONV1_DEEP = 32
CONV1_SIZE = 5

# Layer 2 (convolution)
CONV2_DEEP = 64
CONV2_SIZE = 5

# Fully connected layer
FC_SIZE = 512


def interence(input_tensor, train, regularizer):
    with tf.variable_scope('layer1-conv'):
        w = tf.get_variable('w', [CONV1_SIZE, CONV1_SIZE, NUM_CHANNEL, CONV1_DEEP],
                            initializer=tf.truncated_normal_initializer(stddev=0.1))
        b = tf.get_variable('b', shape=[CONV1_DEEP], initializer=tf.constant_initializer(0.0))
        # filter shape: [filter_height, filter_width, in_channels, out_channels]
        # input tensor shape: [batch, in_height, in_width, in_channels]
        # strides = [1, stride, stride, 1]
        # output shape: [batch, height, width, channels]
        conv1 = tf.nn.conv2d(input_tensor, w, strides=[1, 1, 1, 1], padding='SAME')
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1, b))

    with tf.variable_scope('layer2-pool'):
        pool1 = tf.nn.max_pool(relu1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    with tf.variable_scope('layer3-conv'):
        w = tf.get_variable('w', [CONV2_SIZE, CONV2_SIZE, CONV1_DEEP, CONV2_DEEP],
                            initializer=tf.truncated_normal_initializer(stddev=0.1))
        b = tf.get_variable('b', shape=[CONV2_DEEP], initializer=tf.constant_initializer(0.0))
        conv2 = tf.nn.conv2d(pool1, w, strides=[1, 1, 1, 1], padding='SAME')
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2, b))

    with tf.variable_scope('layer4-pool'):
        # pool2 shape is [batch_size, 7, 7, 64]
        pool2 = tf.nn.max_pool(relu2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

    # The fully connected layers that follow expect a 1-D vector per example,
    # so flatten pool2 before feeding it in.
    pool_shape = pool2.get_shape().as_list()
    nodes = pool_shape[1] * pool_shape[2] * pool_shape[3]
    reshaped = tf.reshape(pool2, [-1, nodes])

    with tf.variable_scope('layer5-fc1'):
        fc1_w = tf.get_variable('w', shape=[nodes, FC_SIZE],
                                initializer=tf.truncated_normal_initializer(stddev=0.1))
        # Only the fully connected weights are regularized.
        if regularizer is not None:
            tf.add_to_collection('loss', regularizer(fc1_w))
        fc1_b = tf.get_variable('b', shape=[FC_SIZE], initializer=tf.constant_initializer(0.1))
        fc1 = tf.nn.relu(tf.matmul(reshaped, fc1_w) + fc1_b)
        # Dropout randomly zeroes part of the activations to reduce overfitting,
        # so the model generalizes better to the test data.
        # Dropout is normally applied only to the fully connected layers.
        if train:
            fc1 = tf.nn.dropout(fc1, 0.5)

    with tf.variable_scope('layer6-fc2'):
        fc2_w = tf.get_variable('w', shape=[FC_SIZE, NUM_LABEL],
                                initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer is not None:
            tf.add_to_collection('loss', regularizer(fc2_w))
        fc2_b = tf.get_variable('b', shape=[NUM_LABEL], initializer=tf.constant_initializer(0.1))
        # The last layer returns raw logits; no activation is applied here.
        logit = tf.matmul(fc1, fc2_w) + fc2_b

    return logit
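A quick sanity check you might run on the definition above (assuming the file is saved as mnist_interence.py inside a mnist_cnn package, which is how the training script below imports it): two SAME-padded 2x2 max-pools shrink 28 -> 14 -> 7, so the flattened vector has 7 * 7 * 64 = 3136 elements and the logits end up with shape [None, 10].

import tensorflow as tf
from mnist_cnn import mnist_interence

# Build the inference graph once, without dropout or regularization,
# and inspect the static shape of the output logits.
x = tf.placeholder(tf.float32, [None, 28, 28, 1])
logits = mnist_interence.interence(x, train=False, regularizer=None)
print(logits.get_shape().as_list())   # expected: [None, 10]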

二、訓(xùn)練部分

import os
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from mnist_cnn import mnist_interence

BATCH_SIZE = 100
# Base learning rate; the decayed rate below is what the optimizer actually uses.
LEARNING_RATE_BASE = 0.01
LEARNING_RATE_DECAY = 0.99
REGULARIZATION_RATE = 0.0001
MOVING_AVERAGE_DECAY = 0.99
TRAIN_STEP = 300000
MODEL_PATH = 'model'
MODEL_NAME = 'model'


def train(mnist):
    # The network expects 4-D input: [batch, height, width, channels].
    x = tf.placeholder(tf.float32,
                       shape=[None, mnist_interence.IMAGE_SIZE, mnist_interence.IMAGE_SIZE,
                              mnist_interence.NUM_CHANNEL],
                       name='x-input')
    y_ = tf.placeholder(tf.float32, shape=[None, mnist_interence.OUTPUT_NODE], name='y-input')

    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    y = mnist_interence.interence(x, True, regularizer)
    global_step = tf.Variable(0, trainable=False)

    # Maintain exponential moving averages of all trainable variables.
    variable_average = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variable_average_ops = variable_average.apply(tf.trainable_variables())

    # The labels are one-hot, so argmax converts them to sparse class indices.
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)
    # Total loss = cross entropy + the L2 terms collected in the 'loss' collection.
    loss = cross_entropy_mean + tf.add_n(tf.get_collection('loss'))

    # Decay the learning rate once per epoch over the training set.
    learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE, global_step,
                                               mnist.train.num_examples / BATCH_SIZE,
                                               LEARNING_RATE_DECAY)
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
    train_op = tf.group(train_step, variable_average_ops)
    saver = tf.train.Saver()

    with tf.Session() as sess:
        tf.global_variables_initializer().run()
        for i in range(TRAIN_STEP):
            # The network input is [BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, CHANNEL],
            # so the flat MNIST batch has to be reshaped first.
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            reshape_xs = np.reshape(xs, (BATCH_SIZE, mnist_interence.IMAGE_SIZE,
                                         mnist_interence.IMAGE_SIZE, mnist_interence.NUM_CHANNEL))
            _, loss_value, step, learn_rate = sess.run([train_op, loss, global_step, learning_rate],
                                                       feed_dict={x: reshape_xs, y_: ys})
            if i % 1000 == 0:
                print('After %d steps, loss on train is %g, and learn rate is %g'
                      % (step, loss_value, learn_rate))
                saver.save(sess, os.path.join(MODEL_PATH, MODEL_NAME), global_step=global_step)


def main():
    mnist = input_data.read_data_sets('../mni_data', one_hot=True)
    train(mnist)


if __name__ == '__main__':
    main()
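To make the learning-rate schedule concrete, the following illustrative snippet (plain Python, not part of the training script) evaluates the default non-staircase form of tf.train.exponential_decay, base * decay ** (global_step / decay_steps), for the settings used above and MNIST's 55,000 training examples.

# Illustrative only: base rate 0.01, decay 0.99 applied once per
# epoch of 55000 / 100 = 550 steps.
base, decay, decay_steps = 0.01, 0.99, 55000 / 100

def decayed_lr(step):
    # Non-staircase exponential decay, as used by default.
    return base * decay ** (step / decay_steps)

for step in (0, 550, 5500, 30000):
    print(step, round(decayed_lr(step), 6))
# 0 -> 0.01, 550 -> 0.0099, 5500 -> ~0.00904, 30000 -> ~0.00578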

驗(yàn)證部分

import time
import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
from mnist_cnn import mnist_interence
from mnist_cnn import mnist_train

EVAL_INTERVAL_SECS = 10
BATCH_SIZE = 100


def evaluate(mnist):
    with tf.Graph().as_default():
        x = tf.placeholder(tf.float32,
                           shape=[None, mnist_interence.IMAGE_SIZE, mnist_interence.IMAGE_SIZE,
                                  mnist_interence.NUM_CHANNEL],
                           name='x-input')
        y_ = tf.placeholder(tf.float32, shape=[None, mnist_interence.OUTPUT_NODE], name='y-input')

        # Reshape the flat validation images into the 4-D shape the network expects.
        xs, ys = mnist.validation.images, mnist.validation.labels
        reshape_xs = np.reshape(xs, (-1, mnist_interence.IMAGE_SIZE,
                                     mnist_interence.IMAGE_SIZE, mnist_interence.NUM_CHANNEL))
        val_feed = {x: reshape_xs, y_: ys}

        # Build the inference graph without dropout or regularization.
        y = mnist_interence.interence(x, False, None)
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

        # Restore the moving-average (shadow) values into the model variables.
        variable_average = tf.train.ExponentialMovingAverage(mnist_train.MOVING_AVERAGE_DECAY)
        val_to_restore = variable_average.variables_to_restore()
        saver = tf.train.Saver(val_to_restore)

        while True:
            with tf.Session() as sess:
                ckpt = tf.train.get_checkpoint_state(mnist_train.MODEL_PATH)
                if ckpt and ckpt.model_checkpoint_path:
                    saver.restore(sess, ckpt.model_checkpoint_path)
                    # The global step is encoded in the checkpoint file name.
                    global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
                    accuracy_score = sess.run(accuracy, feed_dict=val_feed)
                    print('After %s training steps, the accuracy is %g' % (global_step, accuracy_score))
                else:
                    print('No checkpoint file found')
            time.sleep(EVAL_INTERVAL_SECS)


def main():
    mnist = input_data.read_data_sets('../mni_data', one_hot=True)
    evaluate(mnist)


if __name__ == '__main__':
    main()
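The one non-obvious call here is variables_to_restore(). The small sketch below (the scope name is illustrative) shows roughly what it returns: a dictionary mapping each variable's moving-average ("shadow") name in the checkpoint back to the model variable, so evaluation runs with the averaged weights.

import tensorflow as tf

# One illustrative model variable, named like the first FC weight in LeNet-5.
with tf.variable_scope('layer5-fc1'):
    w = tf.get_variable('w', shape=[3136, 512],
                        initializer=tf.truncated_normal_initializer(stddev=0.1))

ema = tf.train.ExponentialMovingAverage(0.99)
print(ema.variables_to_restore())
# Roughly: {'layer5-fc1/w/ExponentialMovingAverage': <tf.Variable 'layer5-fc1/w:0' ...>}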

  最后,在MNIST數(shù)據(jù)集中的準(zhǔn)確率大約在99.4%左右
