TensorFlow Example 7: Implementing LeNet-5 for MNIST Recognition
In this article, we implement LeNet-5 with TensorFlow 2.x and use it to recognize MNIST handwritten digits.
```python
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import keras
```

Data Preprocessing
First, we load the MNIST data:
```python
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
```

We add a channel dimension to the feature data, as required by LeNet-5's input:
```python
print(x_train.shape, y_train.shape)
x_train = x_train.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)
print(x_train.shape, y_train.shape)
```

```
(60000, 28, 28) (60000,)
(60000, 28, 28, 1) (60000,)
```
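The reshape simply appends a trailing channel axis. A minimal NumPy sketch (standalone, with a zero array standing in for the image tensor) shows an equivalent way to do it:

```python
import numpy as np

# Stand-in for x_train: 60000 grayscale images of 28x28
x = np.zeros((60000, 28, 28))

# Equivalent to x.reshape(60000, 28, 28, 1): append a channel axis of size 1
x4 = np.expand_dims(x, axis=-1)   # or x[..., np.newaxis]

print(x4.shape)  # (60000, 28, 28, 1)
```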
Normalize the features to the [0, 1] range:

```python
x_train = x_train / 255.0
x_test = x_test / 255.0
```
One-hot encode the labels:

```python
y_train = np.array(pd.get_dummies(y_train))
y_test = np.array(pd.get_dummies(y_test))
```
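`pd.get_dummies` turns the integer labels into an (N, 10) matrix of zeros and ones. A minimal NumPy sketch of the same transformation (the `one_hot` helper is illustrative, not from the article):

```python
import numpy as np

def one_hot(labels, num_classes):
    """Return a (len(labels), num_classes) float32 0/1 matrix."""
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0
    return out

y = one_hot(np.array([0, 3, 9]), 10)
print(y.shape)  # (3, 10)
print(y[1])     # 1.0 at index 3, zeros elsewhere
```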
Building the Model

We build LeNet-5 with the Keras Sequential API:
```python
model = keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(filters=6, kernel_size=(5, 5), input_shape=(28, 28, 1),
                                 padding='same', activation='sigmoid'))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=16, kernel_size=(5, 5), padding='valid', activation='sigmoid'))
model.add(tf.keras.layers.AveragePooling2D(pool_size=(2, 2)))
model.add(tf.keras.layers.Conv2D(filters=120, kernel_size=(5, 5), padding='valid', activation='sigmoid'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(84, activation='sigmoid'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
```
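As a sanity check on the layer definitions, the parameter counts can be reproduced by hand: a `Conv2D` layer has `kh * kw * c_in * filters` kernel weights plus one bias per filter, and a `Dense` layer has `n_in * n_out` weights plus `n_out` biases. A plain-Python sketch:

```python
# Parameters of a Conv2D layer: kernel weights + one bias per filter
def conv_params(kh, kw, c_in, filters):
    return kh * kw * c_in * filters + filters

# Parameters of a Dense layer: weight matrix + biases
def dense_params(n_in, n_out):
    return n_in * n_out + n_out

total = (conv_params(5, 5, 1, 6)        # 156
         + conv_params(5, 5, 6, 16)     # 2,416
         + conv_params(5, 5, 16, 120)   # 48,120
         + dense_params(120, 84)        # 10,164
         + dense_params(84, 10))        # 850
print(total)  # 61706
```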
Let's look at the model in detail: each layer's output shape, its trainable parameter count, and the total number of parameters.

```python
model.summary()
```

```
Model: "sequential_9"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_14 (Conv2D)           (None, 28, 28, 6)         156
_________________________________________________________________
average_pooling2d_10 (Averag (None, 14, 14, 6)         0
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 10, 10, 16)        2416
_________________________________________________________________
average_pooling2d_11 (Averag (None, 5, 5, 16)          0
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 1, 1, 120)         48120
_________________________________________________________________
flatten_2 (Flatten)          (None, 120)               0
_________________________________________________________________
dense_3 (Dense)              (None, 84)                10164
_________________________________________________________________
dense_4 (Dense)              (None, 10)                850
=================================================================
Total params: 61,706
Trainable params: 61,706
Non-trainable params: 0
_________________________________________________________________
```

```python
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
print(x_train.shape, y_train.shape, x_test.shape, y_test.shape)
```

```
(60000, 28, 28, 1) (60000, 10) (10000, 28, 28, 1) (10000, 10)
```

Training the Model
```python
history = model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=10)
```

```
Epoch 1/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0638 - acc: 0.9805 - val_loss: 0.0618 - val_acc: 0.9801
Epoch 2/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0548 - acc: 0.9832 - val_loss: 0.0515 - val_acc: 0.9830
Epoch 3/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0480 - acc: 0.9851 - val_loss: 0.0727 - val_acc: 0.9763
Epoch 4/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0431 - acc: 0.9870 - val_loss: 0.0420 - val_acc: 0.9864
Epoch 5/10
1875/1875 [==============================] - 9s 5ms/step - loss: 0.0390 - acc: 0.9881 - val_loss: 0.0461 - val_acc: 0.9851
Epoch 6/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0347 - acc: 0.9889 - val_loss: 0.0394 - val_acc: 0.9866
Epoch 7/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0309 - acc: 0.9904 - val_loss: 0.0434 - val_acc: 0.9851
Epoch 8/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0279 - acc: 0.9908 - val_loss: 0.0373 - val_acc: 0.9879
Epoch 9/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0257 - acc: 0.9919 - val_loss: 0.0353 - val_acc: 0.9886
Epoch 10/10
1875/1875 [==============================] - 10s 5ms/step - loss: 0.0229 - acc: 0.9930 - val_loss: 0.0361 - val_acc: 0.9876
```

The accuracy reaches over 98%.
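`history.history` holds the per-epoch metrics, which are handy for spotting overfitting. A sketch of plotting the training curves (the `hist` dict below is a stand-in for the real `history.history`, with only a few placeholder epochs):

```python
import matplotlib
matplotlib.use('Agg')           # headless backend so this runs in scripts
import matplotlib.pyplot as plt

# Stand-in for history.history; a real run fills one value per epoch
hist = {'loss':     [0.064, 0.043, 0.023],
        'val_loss': [0.062, 0.042, 0.036],
        'acc':      [0.980, 0.987, 0.993],
        'val_acc':  [0.980, 0.986, 0.988]}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(hist['loss'], label='train')
ax1.plot(hist['val_loss'], label='val')
ax1.set_title('loss'); ax1.legend()
ax2.plot(hist['acc'], label='train')
ax2.plot(hist['val_acc'], label='val')
ax2.set_title('accuracy'); ax2.legend()
fig.savefig('lenet_history.png')
```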
Saving the Model
```python
model.save('mnist.h5')
```

Loading the Model & Predicting
We now use the model above to predict a handwritten digit:
```python
import cv2

img = cv2.imread('3.png', 0)   # read the image as grayscale
plt.imshow(img)

img = cv2.resize(img, (28, 28))
img = img.reshape(1, 28, 28, 1)
img = img / 255.0

my_model = tf.keras.models.load_model('mnist.h5')
predict = my_model.predict(img)
print(predict)
print(np.argmax(predict))
```

```
[[4.9507696e-09 7.0097293e-08 1.7773251e-06 9.9997258e-01 3.6114369e-09
  1.9603556e-05 2.9246516e-10 1.3854858e-06 7.9077779e-07 3.7732302e-06]]
3
```
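`model.predict` returns one row of 10 class probabilities per input image (the softmax output), and `np.argmax` over the last axis picks the predicted digit. A standalone sketch using rounded values from the probabilities printed above:

```python
import numpy as np

# Softmax output for one image: one probability per digit 0-9
probs = np.array([[4.95e-09, 7.01e-08, 1.78e-06, 9.9997e-01, 3.61e-09,
                   1.96e-05, 2.92e-10, 1.39e-06, 7.91e-07, 3.77e-06]])

digit = int(np.argmax(probs, axis=-1)[0])
print(digit)  # 3
```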
Summary

We implemented LeNet-5 with TensorFlow 2.x and reached over 98% validation accuracy on MNIST.