
Andrew Ng Assignment 1: Cat Classification with Logistic Regression

Published: 2024/7/23

The idea: multiply the input samples X by randomly initialized weights W and pass the result through a sigmoid activation to get the output. Since this is a binary classification problem, compute the loss with the cross-entropy function, then use the chain rule to back-propagate through the loss and obtain the partial derivatives with respect to W and b, and update W and b by gradient descent. (There are many gradient-descent variants, such as Momentum and Adam, to be covered in detail later.) What remains is choosing the number of iterations and the learning rate.
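The chain-rule step described above works out to a compact vectorized form. With $A = \sigma(W^{T}X + b)$ and $m$ samples, the cross-entropy cost and its gradients (matching the `dw` and `db` computed in the code) are:

```latex
J = -\frac{1}{m}\sum_{i=1}^{m}\Big[y^{(i)}\log a^{(i)} + \big(1-y^{(i)}\big)\log\big(1-a^{(i)}\big)\Big]

\frac{\partial J}{\partial W} = \frac{1}{m}\,X\,(A-Y)^{T},
\qquad
\frac{\partial J}{\partial b} = \frac{1}{m}\sum_{i=1}^{m}\big(a^{(i)}-y^{(i)}\big)
```

The gradient-descent update is then $W \leftarrow W - \alpha\,\partial J/\partial W$ and $b \leftarrow b - \alpha\,\partial J/\partial b$, where $\alpha$ is the learning rate.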

The first course's assignment provides the dataset directly, so no dataset preparation is needed. Below is the code that reads the dataset; the dataset link is https://download.csdn.net/download/fanzonghao/10539175

Save it as lr_utils.py:

```python
import numpy as np
import h5py

def load_dataset():
    train_dataset = h5py.File('datasets/train_catvnoncat.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # your train set labels

    test_dataset = h5py.File('datasets/test_catvnoncat.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # your test set labels

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))
    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes

if __name__ == '__main__':
    train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes = load_dataset()
    print('training set shape = {}'.format(train_set_x_orig.shape))
    print('training labels shape = {}'.format(train_set_y_orig.shape))
    print('first 10 training labels = {}'.format(train_set_y_orig[:, :10]))
    print('test set shape = {}'.format(test_set_x_orig.shape))
    print('test labels shape = {}'.format(test_set_y_orig.shape))
    print('{}'.format(classes))
```

The printed output shows 209 training samples, each of size 64x64x3.


The test code below shows what the labels 0 and 1 stand for:

```python
import cv2
from lr_utils import load_dataset

train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
cv2.imshow('img0', train_set_x_orig[0])
cv2.waitKey()
cv2.imshow('img2', train_set_x_orig[2])
cv2.waitKey()
```

As you can see, 0 means not a cat and 1 means a cat.

The training labels have shape Y = (1, 209). Flattening each image into a row vector would give X = (209, 64*64*3), but for W*X = Y to work out, the layout finally used is one sample per column, X = (64*64*3, 209), with weights W = (64*64*3, 1); W is transposed in the computation.
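A quick sketch to confirm these shapes line up; random arrays stand in for the real dataset here, and the numbers 209 and 64 come from the shapes printed above:

```python
import numpy as np

m = 209                             # number of training samples
X = np.random.rand(64 * 64 * 3, m)  # one flattened image per column
w = np.zeros((64 * 64 * 3, 1))      # weight vector
b = 0.0

# logits: (1, 64*64*3) x (64*64*3, m) -> (1, m), same shape as the labels Y
Z = np.dot(w.T, X) + b
assert Z.shape == (1, m)
```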

First initialize the weights W; the sigmoid activation produces the output A. The loss is cross-entropy; the chain rule gives the derivatives with respect to W and b, which are then updated. During the computation, pay attention to keeping the dimensions consistent; assert statements can check this.

The full code:

```python
import numpy as np
from lr_utils import load_dataset
from matplotlib import pyplot as plt
import cv2
"""
Logistic regression for cat classification.
"""

# sigmoid activation
def sigmoid(z):
    s = 1.0 / (1 + np.exp(-z))
    return s

# initialize the weights to zero; w has shape (dim, 1), here dim = 64*64*3
def initialize_zeros(dim):
    w = np.zeros(dim).reshape(dim, 1)
    b = 0
    return w, b

# forward and backward propagation
def propagate(w, b, X, Y):
    m = X.shape[1]
    A = sigmoid(np.dot(w.T, X) + b)
    assert A.shape == Y.shape
    cost = -1 / m * (np.dot(Y, np.log(A).T) + np.dot((1 - Y), np.log(1 - A).T))
    dw = 1 / m * np.dot(X, (A - Y).T)
    db = 1 / m * np.sum(A - Y)
    grads = {'dw': dw, 'db': db}
    cost = np.squeeze(cost)
    return cost, grads

'''
Update the weights over num_iterations steps with the given learning rate;
return the final parameters, gradients, and the recorded costs.
'''
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost=False):
    costs = []
    for i in range(num_iterations):
        cost, grads = propagate(w, b, X, Y)
        dw = grads['dw']
        db = grads['db']
        w = w - learning_rate * dw
        b = b - learning_rate * db
        if i % 100 == 0:
            costs.append(cost)
        if print_cost and i % 100 == 0:
            print('after iteration %i:%f' % (i, cost))
    params = {'w': w, 'b': b}
    grads = {'dw': dw, 'db': db}
    return params, grads, costs

"""
Predict cat / not-cat with the trained weights.
"""
def predict(w, b, X):
    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(X.shape[0], 1)
    A = sigmoid(np.dot(w.T, X) + b)
    for i in range(A.shape[1]):
        if A[0, i] > 0.5:
            Y_prediction[0, i] = 1
        else:
            Y_prediction[0, i] = 0
    return Y_prediction

"""
Sanity-check the updates of W and b on a toy example while developing;
commented out once verified.
"""
def test():
    dim = 2
    w, b = initialize_zeros(dim)
    print('initialize w,b=', w, b)
    w, b, X, Y = np.array([[1], [2]]), 2, np.array([[1, 2], [3, 4]]), np.array([[1, 0]])
    cost, grads = propagate(w, b, X, Y)
    print('cost=', cost)
    print('dw=', grads['dw'])
    print('db=', grads['db'])
    params, grads, costs = optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False)
    print('w', params['w'])
    print('b', params['b'])
    print('iterations dw=', grads['dw'])
    print('iterations db=', grads['db'])
    print('costs=', costs)
    Y_prediction = predict(w, b, X)
    print('Y_prediction=', Y_prediction)

def model(X_train, Y_train, X_test, Y_test, num_iterations, learning_rate, print_cost):
    w, b = initialize_zeros(X_train.shape[0])
    params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost=True)
    Y_prediction_train = predict(params['w'], params['b'], X_train)
    Y_prediction_test = predict(params['w'], params['b'], X_test)
    print('train accuracy is {}'.format(np.mean(Y_prediction_train == Y_train)))
    print('test accuracy is {}'.format(np.mean(Y_prediction_test == Y_test)))
    # store the *trained* parameters, not the initial zeros
    d = {"costs": costs, 'w': params['w'], 'b': params['b'],
         'Y_prediction_train': Y_prediction_train, 'Y_prediction_test': Y_prediction_test,
         'learning_rate': learning_rate, 'num_iterations': num_iterations}
    return d

if __name__ == '__main__':
    # test()
    train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
    # train set: flatten each image into a column, then scale to [0, 1]
    train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
    train_set_x = train_set_x_flatten / 255.
    train_set_y_flatten = train_set_y.reshape(train_set_y.shape[0], -1)
    # test set
    test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
    test_set_x = test_set_x_flatten / 255.
    test_set_y_flatten = test_set_y.reshape(test_set_y.shape[0], -1)
    d = model(train_set_x, train_set_y_flatten, test_set_x, test_set_y_flatten,
              num_iterations=2000, learning_rate=0.002, print_cost=False)
    # plot the cost curve
    plt.plot(d['costs'])
    plt.xlabel('iteration')
    plt.ylabel('cost')
    plt.show()
    # try a custom cat image
    img = cv2.imread('images/my_image2.jpg')
    imgsize = cv2.resize(img, (64, 64), interpolation=cv2.INTER_CUBIC)
    cv2.imshow('imgsize', imgsize)
    cv2.waitKey(0)
    my_cat = imgsize.reshape(-1, 1) / 255.  # scale like the training data
    My_cat_prediction = predict(d['w'], d['b'], my_cat)
    print('My_cat_prediction', My_cat_prediction)
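One detail worth stressing when classifying your own image: it has to be preprocessed exactly like the training data, i.e. resized to 64x64, flattened into a (64*64*3, 1) column, and scaled by 255. Note also that cv2 loads images as BGR while the h5 dataset stores RGB, so the channels should be reversed as well. A minimal sketch, using a random uint8 array as a stand-in for the resized image:

```python
import numpy as np

# stand-in for cv2.resize(cv2.imread(...), (64, 64)): a 64x64 BGR uint8 image
imgsize = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

img_rgb = imgsize[:, :, ::-1]  # BGR -> RGB to match the h5 data
# flatten into one column and scale to [0, 1] like the training set
my_cat = img_rgb.reshape(-1, 1).astype(np.float64) / 255.
assert my_cat.shape == (64 * 64 * 3, 1)
```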

The output:

The test accuracy is decent, but because the sample size is small, the custom cat image is still misclassified.

