
Handwritten Digit Recognition with a Convolutional Neural Network: A Python Implementation


1. CNN Network Structure and Construction

Parameters:

The dimensions of the input data (channels, height, width):

input_dim=(1, 28, 28)

Hyperparameters of the convolutional layer: filter_num is the number of filters, filter_size the filter size, stride the stride, and pad the padding:

conv_param={'filter_num':30, 'filter_size':5, 'pad':0, 'stride':1}

hidden_size: the number of neurons in the fully connected hidden layer;

output_size: the number of neurons in the fully connected output layer;

weight_init_std: the standard deviation used when initializing the weights.

hidden_size=100, output_size=10, weight_init_std=0.01
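These hyperparameters fix the size of every layer. As a quick sanity check (a minimal sketch using only the numbers above), the convolution output size follows the usual formula (input_size - filter_size + 2*pad) / stride + 1, and the subsequent 2x2 pooling halves each spatial dimension:

input_size = 28                                                       # height/width of an MNIST image
filter_num, filter_size, pad, stride = 30, 5, 0, 1
conv_output_size = (input_size - filter_size + 2*pad) // stride + 1   # (28 - 5 + 0) / 1 + 1 = 24
pool_output_size = filter_num * (conv_output_size // 2) ** 2          # 30 * 12 * 12 = 4320
print(conv_output_size, pool_output_size)                             # 24 4320

The value 4320 is exactly the pool_output_size that sizes W2 in the initialization below.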

The parameters to be learned are the weights and biases of the first (convolutional) layer and of the remaining two fully connected layers. Weights and biases are stored in the params dictionary under keys starting with W and b respectively:

self.params = {}
self.params['W1'] = weight_init_std * \
                    np.random.randn(filter_num, input_dim[0], filter_size, filter_size)
self.params['b1'] = np.zeros(filter_num)
self.params['W2'] = weight_init_std * \
                    np.random.randn(pool_output_size, hidden_size)
self.params['b2'] = np.zeros(hidden_size)
self.params['W3'] = weight_init_std * \
                    np.random.randn(hidden_size, output_size)
self.params['b3'] = np.zeros(output_size)
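For the default hyperparameters this produces the following parameter shapes; the standalone sketch below (hypothetical, outside the class) reproduces the initialization just to print them:

import numpy as np

weight_init_std = 0.01
params = {
    'W1': weight_init_std * np.random.randn(30, 1, 5, 5),  # 30 filters, 1 input channel, 5x5 kernels
    'b1': np.zeros(30),
    'W2': weight_init_std * np.random.randn(4320, 100),    # 4320 = 30 * 12 * 12 pooled activations
    'b2': np.zeros(100),
    'W3': weight_init_std * np.random.randn(100, 10),
    'b3': np.zeros(10),
}
for key, val in params.items():
    print(key, val.shape)  # W1 (30, 1, 5, 5), b1 (30,), W2 (4320, 100), b2 (100,), W3 (100, 10), b3 (10,)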

Code:

# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # setting for importing files from the parent directory
import pickle
import numpy as np
from collections import OrderedDict
from common.layers import *
from common.gradient import numerical_gradient


class SimpleConvNet:
    """A simple ConvNet

    conv - relu - pool - affine - relu - affine - softmax

    Parameters
    ----------
    input_dim : dimensions of the input data (channels, height, width)
    conv_param : hyperparameters of the convolutional layer
                 (filter_num, filter_size, pad, stride)
    hidden_size : number of neurons in the hidden fully connected layer
    output_size : output size (10 for MNIST)
    weight_init_std : standard deviation of the weights at initialization (e.g. 0.01)
    """
    def __init__(self, input_dim=(1, 28, 28),
                 conv_param={'filter_num':30, 'filter_size':5, 'pad':0, 'stride':1},
                 hidden_size=100, output_size=10, weight_init_std=0.01):
        filter_num = conv_param['filter_num']
        filter_size = conv_param['filter_size']
        filter_pad = conv_param['pad']
        filter_stride = conv_param['stride']
        input_size = input_dim[1]
        conv_output_size = (input_size - filter_size + 2*filter_pad) / filter_stride + 1
        pool_output_size = int(filter_num * (conv_output_size/2) * (conv_output_size/2))

        # initialize the weights
        self.params = {}
        self.params['W1'] = weight_init_std * \
                            np.random.randn(filter_num, input_dim[0], filter_size, filter_size)
        self.params['b1'] = np.zeros(filter_num)
        self.params['W2'] = weight_init_std * \
                            np.random.randn(pool_output_size, hidden_size)
        self.params['b2'] = np.zeros(hidden_size)
        self.params['W3'] = weight_init_std * \
                            np.random.randn(hidden_size, output_size)
        self.params['b3'] = np.zeros(output_size)

        # build the layers
        self.layers = OrderedDict()
        self.layers['Conv1'] = Convolution(self.params['W1'], self.params['b1'],
                                           conv_param['stride'], conv_param['pad'])
        self.layers['Relu1'] = Relu()
        self.layers['Pool1'] = Pooling(pool_h=2, pool_w=2, stride=2)
        self.layers['Affine1'] = Affine(self.params['W2'], self.params['b2'])
        self.layers['Relu2'] = Relu()
        self.layers['Affine2'] = Affine(self.params['W3'], self.params['b3'])

        self.last_layer = SoftmaxWithLoss()

    def predict(self, x):
        for layer in self.layers.values():
            x = layer.forward(x)
        return x

    def loss(self, x, t):
        """Compute the loss function.

        x is the input data, t the teacher (ground-truth) labels.
        """
        y = self.predict(x)
        return self.last_layer.forward(y, t)

    def accuracy(self, x, t, batch_size=100):
        if t.ndim != 1:
            t = np.argmax(t, axis=1)

        acc = 0.0
        for i in range(int(x.shape[0] / batch_size)):
            tx = x[i*batch_size:(i+1)*batch_size]
            tt = t[i*batch_size:(i+1)*batch_size]
            y = self.predict(tx)
            y = np.argmax(y, axis=1)
            acc += np.sum(y == tt)
        return acc / x.shape[0]

    def numerical_gradient(self, x, t):
        """Compute the gradients by numerical differentiation.

        Parameters
        ----------
        x : input data
        t : teacher labels

        Returns
        -------
        A dictionary holding the gradients of each layer:
        grads['W1'], grads['W2'], ... are the layer weights,
        grads['b1'], grads['b2'], ... are the layer biases.
        """
        loss_w = lambda w: self.loss(x, t)

        grads = {}
        for idx in (1, 2, 3):
            grads['W' + str(idx)] = numerical_gradient(loss_w, self.params['W' + str(idx)])
            grads['b' + str(idx)] = numerical_gradient(loss_w, self.params['b' + str(idx)])
        return grads

    def gradient(self, x, t):
        """Compute the gradients by backpropagation.

        Parameters
        ----------
        x : input data
        t : teacher labels

        Returns
        -------
        A dictionary holding the gradients of each layer:
        grads['W1'], grads['W2'], ... are the layer weights,
        grads['b1'], grads['b2'], ... are the layer biases.
        """
        # forward
        self.loss(x, t)

        # backward
        dout = 1
        dout = self.last_layer.backward(dout)

        layers = list(self.layers.values())
        layers.reverse()
        for layer in layers:
            dout = layer.backward(dout)

        # collect the gradients
        grads = {}
        grads['W1'], grads['b1'] = self.layers['Conv1'].dW, self.layers['Conv1'].db
        grads['W2'], grads['b2'] = self.layers['Affine1'].dW, self.layers['Affine1'].db
        grads['W3'], grads['b3'] = self.layers['Affine2'].dW, self.layers['Affine2'].db
        return grads

    def save_params(self, file_name="params.pkl"):
        params = {}
        for key, val in self.params.items():
            params[key] = val
        with open(file_name, 'wb') as f:
            pickle.dump(params, f)

    def load_params(self, file_name="params.pkl"):
        with open(file_name, 'rb') as f:
            params = pickle.load(f)
        for key, val in params.items():
            self.params[key] = val

        for i, key in enumerate(['Conv1', 'Affine1', 'Affine2']):
            self.layers[key].W = self.params['W' + str(i+1)]
            self.layers[key].b = self.params['b' + str(i+1)]
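With the class in place, a quick way to verify that the backpropagation implementation is correct is to compare gradient() against numerical_gradient() on a tiny input. This is a minimal sketch, assuming simple_convnet.py and the common/ package above are importable; note that the numerical pass is very slow, which is why only a single sample is used:

import numpy as np
from simple_convnet import SimpleConvNet

network = SimpleConvNet()
x = np.random.rand(1, 1, 28, 28)  # one random "image"
t = np.array([7])                 # an arbitrary label

grad_backprop = network.gradient(x, t)
grad_numerical = network.numerical_gradient(x, t)
for key in grad_backprop:
    diff = np.average(np.abs(grad_backprop[key] - grad_numerical[key]))
    print(key, diff)  # differences should be tiny (on the order of 1e-8 or smaller)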

2. Training the CNN on the MNIST Dataset and Observing the Results

# coding: utf-8
import sys, os
sys.path.append(os.pardir)  # setting for importing files from the parent directory
import numpy as np
import matplotlib.pyplot as plt
from dataset.mnist import load_mnist
from simple_convnet import SimpleConvNet
from common.trainer import Trainer

# load the data
(x_train, t_train), (x_test, t_test) = load_mnist(flatten=False)

# reduce the data if processing takes too long
x_train, t_train = x_train[:5000], t_train[:5000]
x_test, t_test = x_test[:1000], t_test[:1000]

max_epochs = 20

network = SimpleConvNet(input_dim=(1,28,28),
                        conv_param={'filter_num': 30, 'filter_size': 5, 'pad': 0, 'stride': 1},
                        hidden_size=100, output_size=10, weight_init_std=0.01)

trainer = Trainer(network, x_train, t_train, x_test, t_test,
                  epochs=max_epochs, mini_batch_size=100,
                  optimizer='Adam', optimizer_param={'lr': 0.001},
                  evaluate_sample_num_per_epoch=1000)
trainer.train()

# save the parameters
network.save_params("params.pkl")
print("Saved Network Parameters!")

# plot the accuracy curves
markers = {'train': 'o', 'test': 's'}
x = np.arange(max_epochs)
plt.plot(x, trainer.train_acc_list, marker='o', label='train', markevery=2)
plt.plot(x, trainer.test_acc_list, marker='s', label='test', markevery=2)
plt.xlabel("epochs")
plt.ylabel("accuracy")
plt.ylim(0, 1.0)
plt.legend(loc='lower right')
plt.show()
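Once training has saved params.pkl, the learned weights can be reloaded for inference without retraining. A minimal sketch, assuming the same dataset/ and simple_convnet modules and that the default constructor arguments match the trained configuration (they do here):

# coding: utf-8
import numpy as np
from dataset.mnist import load_mnist
from simple_convnet import SimpleConvNet

(_, _), (x_test, t_test) = load_mnist(flatten=False)

network = SimpleConvNet()
network.load_params("params.pkl")

y = network.predict(x_test[:10])
print(np.argmax(y, axis=1))  # predicted digits
print(t_test[:10])           # ground-truth labels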

Results:

=== epoch:20, train acc:0.998, test acc:0.964 ===
train loss:0.007401310521311435
train loss:0.027614604405772653
train loss:0.010395290280106407
train loss:0.009055562731122933
train loss:0.010614723913692073
train loss:0.012437935682767121
train loss:0.026701506796337437
train loss:0.006204184557258094
train loss:0.010404145189650856
train loss:0.010929826675443866
train loss:0.0043394220957300835
train loss:0.016781798147762927
train loss:0.008747950916926508
train loss:0.022275261048058662
train loss:0.004475751241820642
train loss:0.018634365845167887
train loss:0.010216296159200803
train loss:0.05663255540517016
train loss:0.007190307798334322
train loss:0.05278721478973261
train loss:0.01059534308178735
train loss:0.005966098495078249
train loss:0.010178506340940181
train loss:0.03654597399370525
train loss:0.019495820002866274
train loss:0.01572182958630932
train loss:0.00465907402610126
train loss:0.024876708101982406
train loss:0.005049280179694557
train loss:0.014516301412561905
train loss:0.007808137131314081
train loss:0.0400124952112783
train loss:0.014341889140004867
train loss:0.007797598015128371
train loss:0.02575987545665843
train loss:0.08519312577327812
train loss:0.021226771077661372
train loss:0.004566285129776959
train loss:0.014989958271037414
train loss:0.015107332850379906
train loss:0.017502483559764623
train loss:0.008879393649119861
train loss:0.013326281023352782
train loss:0.021570154414811325
train loss:0.010967106033868279
train loss:0.039365545329473575
train loss:0.03687669299007644
train loss:0.005511731850415302
train loss:0.005646337734962965
=============== Final Test Accuracy ===============
test acc:0.963

The recognition rate on the test data is roughly 96%, which is quite good for such a simple network.
