
Deep Learning with TensorFlow (Project): Keras Handwritten Digit Recognition


Contents

Results preview

Basic theory

1. The softmax activation function

2. The neural network

3. Optimal number of hidden layers and neurons

I. Data preparation

1. Loading the dataset

2. Data processing

2-1. Normalization

2-2. One-hot encoding

II. Fitting the neural network

1. Building the network

2. Setting the optimizer and loss function

3. Training

III. Prediction

1. Keeping a copy of the image data

2. Predicting classes

3. Displaying the results (plt)

Full code


Results preview

Basic theory

This handwritten digit recognition project uses the MNIST dataset: http://yann.lecun.com/exdb/mnist/

1. The softmax activation function

The output layer uses the softmax activation function, which converts the raw outputs into probabilities:
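For a vector of raw outputs z = (z_1, ..., z_10), the standard definition is

    \mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{10} e^{z_j}}

so every output is positive and the ten outputs sum to 1. For example, raw outputs (1, 2) become probabilities of roughly (0.27, 0.73), since e^1 / (e^1 + e^2) ≈ 0.27.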

2. The neural network

The basic network layout:

The 28*28 = 784 input pixels correspond to 784 input neurons; the 10 output neurons correspond to the ten digits 0-9.

(To improve training accuracy, I added some hidden layers.)

3. Optimal number of hidden layers and neurons

The best number of hidden layers and neurons has to be found by experiment: start with a small network (which will underfit), then keep enlarging it until accuracy peaks, stopping just before overfitting sets in. A sketch of this search follows.

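A minimal sketch of that trial-and-error search, assuming the preprocessed train_data/train_label and test_data/test_label from the next section are already in scope; the candidate layer sizes are illustrative, not values from the article:

import tensorflow as tf

def build_model(hidden_units):
    # One hidden layer of the given size, same input/output shapes as the article's model
    return tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(hidden_units, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation='softmax')
    ])

# Grow the hidden layer until validation accuracy stops improving
for units in (32, 64, 128, 256):
    model = build_model(units)
    model.compile(optimizer='sgd', loss='mse', metrics=['accuracy'])
    history = model.fit(train_data, train_label, epochs=5, batch_size=32,
                        validation_data=(test_data, test_label), verbose=0)
    print(units, history.history['val_accuracy'][-1])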

I. Data preparation

# Data preparation
def Data_Preparation():
    global train_data, train_label, test_data, test_label, images, labels

1. Loading the dataset

    # 1. Load the dataset
    mnist = tf.keras.datasets.mnist
    (train_data, train_label), (test_data, test_label) = mnist.load_data()
    # train_data (training images) has shape (60000, 28, 28)
    # train_label (training labels) has shape (60000,)
    # test_data (test images) has shape (10000, 28, 28)
    # test_label (test labels) has shape (10000,)
    images, labels = test_data, test_label
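A quick sanity check (not part of the original code) to confirm the shapes and see what one sample looks like:

import matplotlib.pyplot as plt

print(train_data.shape, train_label.shape)   # (60000, 28, 28) (60000,)
plt.imshow(train_data[0], cmap='gray')       # display the first training image
plt.title(f'label: {train_label[0]}')
plt.show()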

2. Data processing

2-1. Normalization

    # 2-1. Normalize the pixel values to the range [0, 1]
    train_data = train_data / 255
    test_data = test_data / 255

2-2. One-hot encoding

Each digit 0-9 is encoded as a 10-element binary vector in which the position of the single 1 indicates its value.

    # 2-2. One-hot encode the labels
    train_label = tf.keras.utils.to_categorical(train_label, num_classes=10)
    test_label = tf.keras.utils.to_categorical(test_label, num_classes=10)
    #            to_categorical(data to convert, length of the one-hot code)
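To make the encoding concrete, a quick check (illustrative):

import tensorflow as tf

print(tf.keras.utils.to_categorical([3], num_classes=10))
# [[0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]]  -> the 1 sits at index 3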

II. Fitting the neural network

1. Building the network

An input layer (which flattens the data: (28, 28) -> (784,)), several hidden layers, and an output layer (10 neurons, one per digit).

    # 1. Build the network
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),   # flatten the input:
        # (batch, 28, 28) -> (batch, 784)
        tf.keras.layers.Dense(200 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(200 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(200 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(100 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(100 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(100 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation='softmax')  # fully connected output layer:
        # 10 output neurons with softmax activation
    ])
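To inspect the resulting layer shapes and parameter counts, model.summary() can be called right after building; for instance, the first Dense layer holds 784*201 + 201 = 157,785 weights and biases:

model.summary()   # prints one row per layer: output shape and parameter count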

The number of hidden layers and neurons is not fixed; I arrived at these values through my own testing.

Before adding the hidden layers:

After adding the hidden layers:

2. Setting the optimizer and loss function

    # 2. Set the optimizer and loss function
    model.compile(optimizer=SGD(0.3), loss='mse', metrics=['accuracy'])
    #             SGD with learning rate 0.3; mean squared error loss; report accuracy

The best learning rate I have found so far for this model is 0.3.
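Mean squared error works here because the labels are one-hot vectors, but categorical cross-entropy is the more common pairing with a softmax output. A possible alternative, not what this article uses:

model.compile(optimizer=SGD(0.3),
              loss='categorical_crossentropy',   # standard loss for softmax classification
              metrics=['accuracy'])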

3. Training

    # 3. Train
    model.fit(train_data, train_label, epochs=20, batch_size=32,
              validation_data=(test_data, test_label))
    #         training set; 20 passes over it; batches of 32; test set used for validation
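fit() also returns a History object; keeping it makes it easy to plot the learning curves (a sketch, assuming matplotlib is imported as plt):

history = model.fit(train_data, train_label, epochs=20, batch_size=32,
                    validation_data=(test_data, test_label))
plt.plot(history.history['accuracy'], label='train')        # training accuracy per epoch
plt.plot(history.history['val_accuracy'], label='validation')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()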

III. Prediction

1. Keeping a copy of the image data

Two views of the test images are kept here: one for the predictions that follow and one for displaying the images.

Why not use a single copy? Because the two uses need different dimensions: predict() expects the higher-dimensional (batched) form.

    # Add a batch dimension, and rescale to [0, 1] to match the training inputs
    # (images was taken from test_data before normalization)
    Images = (images / 255)[:, np.newaxis]
    # images is displayed as-is; Images is fed to the model for prediction
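A shape check (illustrative) showing why the extra axis is needed:

print(images.shape)       # (10000, 28, 28)
print(Images.shape)       # (10000, 1, 28, 28)
print(Images[0].shape)    # (1, 28, 28) -> a batch containing a single image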

2. Predicting classes

    # Predict the class probabilities for one image
    classification = model.predict(Images[i], batch_size=10)
    # Take the highest-probability class as the result
    result.append(np.argmax(classification[0]))
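As an aside, the whole test set can be classified in one call, which also yields an overall accuracy. A sketch reusing the article's variable names; note that labels still holds the raw integer test labels:

predictions = model.predict(test_data)             # shape (10000, 10)
predicted_digits = np.argmax(predictions, axis=1)  # pick the most likely digit per image
print('test accuracy:', np.mean(predicted_digits == labels))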

3. Displaying the results (plt)

    # Display the result in a 3x3 grid
    x = int(i / 3)
    y = i % 3
    ax[x][y].set_title(f'label:{labels[i]}-predict:{result[i]}')    # title: true label vs. prediction
    ax[x][y].imshow(images[i], 'gray')                              # show the image
    ax[x][y].axis('off')                                            # hide the axes

After 1 training epoch:

After 30 training epochs:

Console output for the 30-epoch run:

D:\Software\Python\Python\python.exe D:/Study/AI/OpenCV/draft.py/main.py
Epoch 1/30
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0393 - accuracy: 0.7064 - val_loss: 0.0114 - val_accuracy: 0.9270
Epoch 2/30
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0105 - accuracy: 0.9310 - val_loss: 0.0075 - val_accuracy: 0.9498
Epoch 3/30
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0072 - accuracy: 0.9538 - val_loss: 0.0062 - val_accuracy: 0.9603
Epoch 4/30
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0057 - accuracy: 0.9635 - val_loss: 0.0055 - val_accuracy: 0.9658
Epoch 5/30
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0047 - accuracy: 0.9703 - val_loss: 0.0048 - val_accuracy: 0.9691
Epoch 6/30
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0040 - accuracy: 0.9746 - val_loss: 0.0048 - val_accuracy: 0.9694
Epoch 7/30
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0034 - accuracy: 0.9787 - val_loss: 0.0049 - val_accuracy: 0.9669
Epoch 8/30
1875/1875 [==============================] - 5s 3ms/step - loss: 0.0030 - accuracy: 0.9816 - val_loss: 0.0043 - val_accuracy: 0.9713
Epoch 9/30
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0026 - accuracy: 0.9844 - val_loss: 0.0038 - val_accuracy: 0.9760
Epoch 10/30
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0023 - accuracy: 0.9864 - val_loss: 0.0050 - val_accuracy: 0.9677
Epoch 11/30
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0021 - accuracy: 0.9873 - val_loss: 0.0037 - val_accuracy: 0.9764
Epoch 12/30
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0018 - accuracy: 0.9894 - val_loss: 0.0037 - val_accuracy: 0.9758
Epoch 13/30
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0017 - accuracy: 0.9901 - val_loss: 0.0041 - val_accuracy: 0.9734
Epoch 14/30
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0015 - accuracy: 0.9911 - val_loss: 0.0045 - val_accuracy: 0.9708
Epoch 15/30
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0014 - accuracy: 0.9921 - val_loss: 0.0038 - val_accuracy: 0.9760
Epoch 16/30
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0013 - accuracy: 0.9922 - val_loss: 0.0036 - val_accuracy: 0.9764
Epoch 17/30
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0012 - accuracy: 0.9932 - val_loss: 0.0035 - val_accuracy: 0.9771
Epoch 18/30
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0011 - accuracy: 0.9942 - val_loss: 0.0034 - val_accuracy: 0.9783
Epoch 19/30
1875/1875 [==============================] - 7s 4ms/step - loss: 9.5588e-04 - accuracy: 0.9947 - val_loss: 0.0034 - val_accuracy: 0.9788
Epoch 20/30
1875/1875 [==============================] - 6s 3ms/step - loss: 9.9405e-04 - accuracy: 0.9942 - val_loss: 0.0037 - val_accuracy: 0.9767
Epoch 21/30
1875/1875 [==============================] - 7s 4ms/step - loss: 8.7466e-04 - accuracy: 0.9952 - val_loss: 0.0037 - val_accuracy: 0.9757
Epoch 22/30
1875/1875 [==============================] - 7s 4ms/step - loss: 7.5603e-04 - accuracy: 0.9959 - val_loss: 0.0037 - val_accuracy: 0.9773
Epoch 23/30
1875/1875 [==============================] - 7s 4ms/step - loss: 7.8775e-04 - accuracy: 0.9955 - val_loss: 0.0035 - val_accuracy: 0.9783
Epoch 24/30
1875/1875 [==============================] - 7s 4ms/step - loss: 7.2748e-04 - accuracy: 0.9961 - val_loss: 0.0034 - val_accuracy: 0.9779
Epoch 25/30
1875/1875 [==============================] - 7s 4ms/step - loss: 7.2780e-04 - accuracy: 0.9959 - val_loss: 0.0036 - val_accuracy: 0.9777
Epoch 26/30
1875/1875 [==============================] - 6s 3ms/step - loss: 5.9373e-04 - accuracy: 0.9969 - val_loss: 0.0032 - val_accuracy: 0.9792
Epoch 27/30
1875/1875 [==============================] - 6s 3ms/step - loss: 5.6153e-04 - accuracy: 0.9970 - val_loss: 0.0034 - val_accuracy: 0.9786
Epoch 28/30
1875/1875 [==============================] - 7s 4ms/step - loss: 5.7011e-04 - accuracy: 0.9969 - val_loss: 0.0033 - val_accuracy: 0.9792
Epoch 29/30
1875/1875 [==============================] - 7s 4ms/step - loss: 5.0371e-04 - accuracy: 0.9973 - val_loss: 0.0037 - val_accuracy: 0.9767
Epoch 30/30
1875/1875 [==============================] - 7s 4ms/step - loss: 5.2224e-04 - accuracy: 0.9972 - val_loss: 0.0033 - val_accuracy: 0.9795

Notice that with this many epochs, the later improvements are marginal.
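One common way (not used in this article) to stop training automatically once the validation loss plateaus is an EarlyStopping callback:

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=3,                 # tolerate 3 stagnant epochs
                                              restore_best_weights=True)  # keep the best epoch's weights
model.fit(train_data, train_label, epochs=30, batch_size=32,
          validation_data=(test_data, test_label),
          callbacks=[early_stop])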

Full code

# Keras handwritten digit recognition
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'

import tensorflow as tf
from tensorflow.keras.optimizers import SGD
import numpy as np
import matplotlib.pyplot as plt


# Data preparation
def Data_Preparation():
    global train_data, train_label, test_data, test_label, images, labels

    # 1. Load the dataset
    mnist = tf.keras.datasets.mnist
    (train_data, train_label), (test_data, test_label) = mnist.load_data()
    # train_data shape: (60000, 28, 28); train_label shape: (60000,)
    # test_data shape: (10000, 28, 28); test_label shape: (10000,)
    images, labels = test_data, test_label

    # 2. Data processing (normalization, one-hot encoding)
    # 2-1. Normalize the pixel values to [0, 1]
    train_data = train_data / 255
    test_data = test_data / 255
    # 2-2. One-hot encode the labels (length-10 codes)
    train_label = tf.keras.utils.to_categorical(train_label, num_classes=10)
    test_label = tf.keras.utils.to_categorical(test_label, num_classes=10)


# Fit the neural network
def Neural_Network():
    global model
    # 1. Build the network
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),   # flatten: (batch, 28, 28) -> (batch, 784)
        tf.keras.layers.Dense(200 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(200 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(200 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(100 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(100 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(100 + 1, activation=tf.nn.relu),
        tf.keras.layers.Dense(10, activation='softmax')  # output layer: 10 classes, softmax
    ])
    # 2. Set the optimizer and loss function
    model.compile(optimizer=SGD(0.3), loss='mse', metrics=['accuracy'])
    #             SGD with learning rate 0.3; mean squared error loss; report accuracy
    # 3. Train
    model.fit(train_data, train_label, epochs=20, batch_size=32,
              validation_data=(test_data, test_label))
    #         training set; 20 passes over it; batches of 32; test set used for validation


# Predict handwritten digits (visualize predictions for part of the test set)
def Predict():
    global images
    # Prediction results
    result = []
    # plt figure: a 3x3 grid of subplots
    f, ax = plt.subplots(3, 3, figsize=(8, 6))
    # Add a batch dimension, and rescale to [0, 1] to match the training inputs
    # (images was taken from test_data before normalization)
    Images = (images / 255)[:, np.newaxis]
    # images is displayed as-is; Images is fed to the model for prediction

    # Classify and display a few test images
    for i in range(9):
        # Predict the class probabilities for one image
        classification = model.predict(Images[i], batch_size=10)
        # Take the highest-probability class as the result
        result.append(np.argmax(classification[0]))
        # Display the result
        x = int(i / 3)
        y = i % 3
        ax[x][y].set_title(f'label:{labels[i]}-predict:{result[i]}')  # true label vs. prediction
        ax[x][y].imshow(images[i], 'gray')                            # show the image
        ax[x][y].axis('off')                                          # hide the axes
    plt.show()


# Prepare the data
Data_Preparation()
# Build and train the network
Neural_Network()
# Predict handwritten digits
Predict()
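A possible follow-up (not in the original): save the trained model so prediction can be rerun without retraining. The filename is illustrative:

model.save('mnist_dense.h5')                             # write architecture + weights to disk
restored = tf.keras.models.load_model('mnist_dense.h5')  # reload for later prediction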
