
Implementing Andrew Ng's Machine Learning Exercise 3 in Python (Multi-class Classifiers and Neural Networks)


Programming Exercise 3:

Multi-class Classification and Neural Networks

Exercise 3 of Andrew Ng's machine learning course. The data are 5000 images of handwritten digits (0-9), each 20×20 pixels. There are two tasks: (1) build a one-vs-all multi-class classifier based on logistic regression for the digits 0-9 using the course data; (2) run forward propagation with the neural network parameters supplied by the course.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.io as io
import scipy.misc  # note: scipy.misc.toimage was removed in SciPy 1.2+; newer environments can use PIL.Image.fromarray instead
import scipy.optimize as opt

1 Multi-class Classification

1.1 Dataset

data = io.loadmat('D:/python/practise/sample/machine-learning-ex3/data/ex3data1.mat')
X, y = data['X'], data['y']
X = np.insert(X, 0, 1, axis=1)  # add the intercept term x0 = 1
print('X shape : {}'.format(X.shape))
print('y shape : {}'.format(y.shape))

X shape : (5000, 401)
y shape : (5000, 1)

np.unique(y)

array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], dtype=uint8)

1.2 Visualizing the Data

# Display a single digit first
def show_1_number(num):
    testImgarr = X[num, 1:].reshape(20, 20).T  # drop the intercept, reshape, transpose to upright orientation
    testImgPIL = scipy.misc.toimage(testImgarr)
    plt.figure(figsize=(3, 3))
    plt.imshow(testImgPIL)

show_1_number(1251)

Reference: https://blog.csdn.net/Cowry5/article/details/80367832

# Following Cowry5's method directly (see the reference above)
def plot_100_image(X):  # plot 100 random digits
    sample_idx = np.random.choice(np.arange(X.shape[0]), 100)  # randomly pick 100 samples
    sample_images = X[sample_idx, :]  # (100, 400)
    fig, ax_array = plt.subplots(nrows=10, ncols=10, sharey=True, sharex=True, figsize=(8, 8))
    for row in range(10):
        for column in range(10):
            ax_array[row, column].matshow(sample_images[10 * row + column].reshape((20, 20)).T, cmap='gray_r')
    plt.xticks([])
    plt.yticks([])
    plt.show()

plot_100_image(X[:, 1:])

1.3 Vectorizing Logistic Regression

1.3.3 Vectorizing Regularized Logistic Regression

Two ways to implement the sigmoid function:

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

from scipy.special import expit

def sigmoid_2(z):
    return expit(z)
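
A quick check (my addition, not part of the exercise) that the two implementations agree:

z_test = np.linspace(-10, 10, 5)
print(np.allclose(sigmoid(z_test), sigmoid_2(z_test)))  # True: both compute 1 / (1 + e^(-z))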

The regularized logistic regression cost function:

$$J(\theta) = -\frac{1}{m}\left[\sum_{i=1}^{m} y^{(i)}\ln h_\theta(x^{(i)}) + \left(1 - y^{(i)}\right)\ln\left(1 - h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$$

def J_function(theta, X, y):
    cost = -y * np.log(sigmoid(X.dot(theta.T))) - (1 - y) * np.log(1 - sigmoid(X.dot(theta.T)))
    return cost.mean()

def J_function_reg(theta, X, y, c=1):
    _theta = theta[1:]  # theta_0 is not regularized
    reg = (c / (2 * len(X))) * (_theta.dot(_theta.T))  # note the parentheses: the term is lambda/(2m), not (lambda/2)*m
    return J_function(theta, X, y) + reg
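
A quick sanity check (my addition): with all-zero parameters every hypothesis equals sigmoid(0) = 0.5, so the cost should be ln 2 ≈ 0.6931 for any 0/1 labels, and the regularization term vanishes:

theta_zero = np.zeros(X.shape[1])
y_check = (y.reshape(-1) == 1).astype(int)  # one-vs-all labels for digit 1
print(J_function_reg(theta_zero, X, y_check))  # ≈ 0.6931 = ln 2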

Because $\theta_0$ is not regularized, the gradient descent update becomes:

$$\text{repeat}\ \{$$
$$\theta_0 := \theta_0 - \alpha\frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]x_0^{(i)}$$
$$\theta_j := \theta_j - \alpha\left\{\frac{1}{m}\sum_{i=1}^{m}\left[h_\theta(x^{(i)}) - y^{(i)}\right]x_j^{(i)} + \frac{\lambda}{m}\theta_j\right\}$$
$$\}$$

The gradient term becomes:

$$\frac{\partial}{\partial\theta_j}J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad (j = 1, 2, \ldots, n)$$

def gradient(theta, X, y):
    gra = X.T.dot(sigmoid(X.dot(theta.T)) - y) / len(X)
    return gra  # an n*1 vector

def gradient_reg(theta, X, y, c=1):
    reg = (c / len(X)) * theta
    reg[0] = 0  # theta_0 is not regularized
    return gradient(theta, X, y) + reg
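
The analytic gradient can be verified against a centered finite difference of the cost; a minimal sketch (my addition) checks a single component to keep it cheap:

eps = 1e-4
theta_check = np.random.rand(X.shape[1]) * 0.01
y_check = (y.reshape(-1) == 1).astype(int)
j = 5  # any index works; j = 0 would exercise the unregularized theta_0
e = np.zeros(X.shape[1])
e[j] = eps
numeric = (J_function_reg(theta_check + e, X, y_check) - J_function_reg(theta_check - e, X, y_check)) / (2 * eps)
analytic = gradient_reg(theta_check, X, y_check)[j]
print(abs(numeric - analytic))  # should be tiny, on the order of 1e-9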

1.4 One-vs-all Classification

theta = np.zeros(401)
lambda_01 = 0.8

The core of one-vs-all logistic regression: train a separate binary classifier for each class. For each digit in turn, treat its samples as positive and the other nine classes as negative, yielding ten sets of parameters; to predict, compute each classifier's probability for the input and pick the class with the highest one.

def tnc_resolver(J_function_reg, theta, gradient_reg, X, y, lambda_n):
    return opt.fmin_tnc(J_function_reg, x0=theta, fprime=gradient_reg, args=(X, y, lambda_n))

def cg_resolver(J_function_reg, theta, gradient_reg, X, y, lambda_n):
    return opt.fmin_cg(J_function_reg, x0=theta, fprime=gradient_reg, args=(X, y, lambda_n), maxiter=50, disp=False, full_output=True)

y[y == 10] = 0  # the dataset labels digit 0 as 10, so map it back to 0
_y = y.reshape(-1)  # flatten y from a column vector to a 1-D array

def make_multiclassifier(lambda_01):
    ten_thetas = np.zeros((10, 401))  # empty array to hold each classifier's parameters
    for i in range(10):
        reload_y = np.where(_y == i, 1, 0)  # one-vs-all labels: current digit = 1, all others = 0
        result = tnc_resolver(J_function_reg, theta, gradient_reg, X, reload_y, lambda_01)
        theta_i = result[0]
        ten_thetas[i] = theta_i
        print('Optimizing for handwritten number {}'.format(i))
    print('Done!')
    return ten_thetas

ten_thetas_01 = make_multiclassifier(lambda_01)

Optimizing for handwritten number 0
Optimizing for handwritten number 1
Optimizing for handwritten number 2
Optimizing for handwritten number 3
Optimizing for handwritten number 4
Optimizing for handwritten number 5
Optimizing for handwritten number 6
Optimizing for handwritten number 7
Optimizing for handwritten number 8
Optimizing for handwritten number 9
Done!

1.4.1 One-vs-all Prediction

def logistic_1vsAll_single(x, ten_thetas):  # predict a single sample
    list_prob = []
    for theta_i in ten_thetas:
        probability_i = sigmoid(x.dot(theta_i.T))
        list_prob.append(probability_i)
    series_prob = pd.Series(list_prob)
    most_like = series_prob.values.argmax()  # numpy's argmax() gives the position of the maximum
    return most_like, series_prob
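
For example, predicting the single sample displayed in section 1.2 with the classifiers trained above (my addition):

most_like, probs = logistic_1vsAll_single(X[1251], ten_thetas_01)
print(most_like)       # the predicted digit for sample 1251
print(probs.round(3))  # the ten per-classifier sigmoid probabilities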

def logistic_1vsAll_more(X, ten_thetas):  # predict all samples at once
    result = X.dot(ten_thetas.T)  # 5000*10: each sample's score under the 10 classifiers
    df_result = pd.DataFrame(result)
    y_predict = df_result.idxmax(axis=1)  # pandas uses idxmax(), not argmax(), for the position of the maximum
    return y_predict

y_predict_01 = logistic_1vsAll_more(X, ten_thetas_01)
y_predict_01.value_counts()

8    1068
0     546
6     533
7     528
1     499
4     499
3     466
2     399
9     321
5     141
dtype: int64

(y_predict_01.values == _y).mean()

0.766

With λ = 0.8 the predictions are visibly skewed (digit 8 is predicted 1068 times, digit 5 only 141) and training accuracy is only 0.766. Removing regularization raises training accuracy, but at the cost of generalization.

lambda_02 = 0
ten_thetas_02 = make_multiclassifier(lambda_02)

D:\Program Files (x86)\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: RuntimeWarning: divide by zero encountered in log
D:\Program Files (x86)\Anaconda3\lib\site-packages\ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in multiply
Optimizing for handwritten number 0
Optimizing for handwritten number 1
Optimizing for handwritten number 2
Optimizing for handwritten number 3
Optimizing for handwritten number 4
Optimizing for handwritten number 5
Optimizing for handwritten number 6
Optimizing for handwritten number 7
Optimizing for handwritten number 8
Optimizing for handwritten number 9
Done!

y_predict_02 = logistic_1vsAll_more(X, ten_thetas_02)
y_predict_02.value_counts()

6    504
7    503
4    503
9    502
8    502
0    502
1    500
2    499
3    493
5    492
dtype: int64

(y_predict_02.values == _y).mean()

0.9736
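
Note that 0.9736, like the 0.766 above, is accuracy measured on the training set itself. To actually test the generalization claim one can hold out part of the data. A minimal sketch (my addition; make_multiclassifier_on is a hypothetical variant of make_multiclassifier that trains on an explicit subset):

def make_multiclassifier_on(X_tr, y_tr, lam):
    # same loop as make_multiclassifier, but over a caller-supplied subset
    thetas = np.zeros((10, 401))
    for i in range(10):
        labels = np.where(y_tr == i, 1, 0)
        thetas[i] = tnc_resolver(J_function_reg, theta, gradient_reg, X_tr, labels, lam)[0]
    return thetas

idx = np.random.permutation(X.shape[0])
train_idx, test_idx = idx[:4000], idx[4000:]
thetas_split = make_multiclassifier_on(X[train_idx], _y[train_idx], 0)
test_pred = logistic_1vsAll_more(X[test_idx], thetas_split)
print((test_pred.values == _y[test_idx]).mean())  # held-out accuracy, typically below the 0.9736 training figure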

2 Neural Networks

Use the instructor's pre-computed theta matrices and forward propagation to derive y. Theta1 (25×401) maps the 400 input pixels plus bias to 25 hidden units; Theta2 (10×26) maps the 26 hidden activations (including bias) to the 10 output units.

# Load the ready-made theta matrices supplied by the course (rather than randomly initializing them)
theta_neurons = io.loadmat('D:/python/practise/sample/machine-learning-ex3/data/ex3weights.mat')

theta_neurons['Theta1'].shape
(25, 401)

theta_neurons['Theta2'].shape
(10, 26)

theta_neu_1 = theta_neurons['Theta1']
theta_neu_2 = theta_neurons['Theta2']

def forwardpropa_single(a_neu_1):
    z_neu_2 = theta_neu_1.dot(a_neu_1)
    a_neu_2 = sigmoid(z_neu_2)
    a_neu_2_bias = np.insert(a_neu_2, 0, 1)  # add the bias unit to the hidden layer
    z_neu_3 = theta_neu_2.dot(a_neu_2_bias)
    a_neu_3 = sigmoid(z_neu_3)
    result = a_neu_3.argmax()
    # the provided weights order the output units as digits 1-9 followed by 0 (label 10)
    if result == 9:
        return 0
    else:
        return result + 1

def forwardpropa_more(A_neu_1):
    Z_neu_2 = A_neu_1.dot(theta_neu_1.T)  # 5000*401 dot 401*25 = 5000*25
    A_neu_2 = sigmoid(Z_neu_2)
    A_neu_2_bias = np.insert(A_neu_2, 0, 1, axis=1)  # add a bias column of ones: 5000*26
    Z_neu_3 = A_neu_2_bias.dot(theta_neu_2.T)  # 5000*26 dot 26*10 = 5000*10
    A_neu_3 = sigmoid(Z_neu_3)
    y = A_neu_3.argmax(axis=1) + 1
    y[y == 10] = 0  # map label 10 back to digit 0
    return y

forwardpropa_single(X[3251])

6

show_1_number(3251)  # the prediction is 6, and the displayed image is indeed a 6

y_neu_pre = forwardpropa_more(X)
(y_neu_pre == y.T).mean()

0.9752
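
To see which digits the network finds hardest, a per-digit accuracy breakdown (my addition):

for d in range(10):
    mask = (_y == d)
    print('digit {}: {:.4f}'.format(d, (y_neu_pre[mask] == d).mean()))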

Testing with a self-drawn image

from PIL import Image

Draw a white digit on a black background in Photoshop at 20×20 resolution and save it as .png or .jpg.

# Convert the image to an array (and display it)
img = Image.open('D:/python/practise/sample/machine-learning-ex3/8.png')
scipy.misc.toimage(np.array(img))

# Flatten the image array and prepend the intercept term
img_arr = np.array(img).ravel()
img_arr = np.insert(img_arr, 0, 1)

like = logistic_1vsAll_single(img_arr, ten_thetas_01)
like[0]  # prediction from the one-vs-all classifier

8

forwardpropa_single(img_arr)  # prediction from forward propagation

8

Both models predict the actual digit correctly; performance is acceptable.
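
One caveat when testing your own images (my addition): ex3data1.mat stores each image as 20×20 grayscale floats in roughly the [0, 1] range, flattened in column-major order (hence the .T in the display code above), while a PNG typically loads as 0-255 integers, possibly with several color channels. If predictions look wrong on other images, normalizing first may help; a sketch assuming Pillow:

img_gray = Image.open('D:/python/practise/sample/machine-learning-ex3/8.png').convert('L')  # force single-channel grayscale
img_arr = np.array(img_gray, dtype=float).T.ravel() / 255.0  # transpose to match the training layout, scale to [0, 1]
img_arr = np.insert(img_arr, 0, 1)  # intercept / bias term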
