

Andrew Ng Assignment 10: Recognizing "Happy" Faces with a Convolutional Neural Network (Keras)

Published 2024/7/23 by 豆豆

The helper code provided with the dataset goes in kt_utils.py:

```python
import keras.backend as K
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt


def mean_pred(y_true, y_pred):
    return K.mean(y_pred)


def load_dataset():
    train_dataset = h5py.File('datasets/train_happy.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:])  # train set features, shape (600, 64, 64, 3)
    train_set_y_orig = np.array(train_dataset["train_set_y"][:])  # train set labels, shape (600,)

    test_dataset = h5py.File('datasets/test_happy.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:])  # test set features, shape (150, 64, 64, 3)
    test_set_y_orig = np.array(test_dataset["test_set_y"][:])  # test set labels, shape (150,)

    classes = np.array(test_dataset["list_classes"][:])  # the list of classes: [0, 1]

    # Reshape the label vectors into row vectors
    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))  # (1, 600)
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))  # (1, 150)

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes
```

Inspect the dataset:

```python
import kt_utils
import cv2
import matplotlib.pyplot as plt

train_set_x_orig, train_set_Y, test_set_x_orig, test_set_Y, classes = kt_utils.load_dataset()
print('training samples = {}'.format(train_set_x_orig.shape))
print('training labels = {}'.format(train_set_Y.shape))
print('test samples = {}'.format(test_set_x_orig.shape))
print('test labels = {}'.format(test_set_Y.shape))
print('label of sample 5 = {}'.format(train_set_Y[0, 5]))
cv2.imshow('1.jpg', train_set_x_orig[5, :, :, :])
cv2.waitKey()
print('label of sample 6 = {}'.format(train_set_Y[0, 6]))
cv2.imshow('1.jpg', train_set_x_orig[6, :, :, :])
cv2.waitKey()
# plt.imshow(train_set_x_orig[5, :, :, :])
# plt.show()
```

Output: there are 600 training samples and 150 test samples, each of size (64, 64, 3). "Happy" is labeled 1 and "not happy" 0, so the labels also need one-hot encoding.
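The one-hot trick used later (indexing the identity matrix with the label vector) can be checked on a toy label row; the sample values here are made up for illustration, but the helper matches the `convert_to_one_hot` defined in the training script:

```python
import numpy as np

def convert_to_one_hot(Y, C):
    # Index the C x C identity matrix with the flattened labels, then
    # transpose so each column is one sample's one-hot vector.
    return np.eye(C)[Y.reshape(-1)].T

Y = np.array([[1, 0, 1]])           # shape (1, 3): labels for 3 samples
one_hot = convert_to_one_hot(Y, 2)  # shape (2, 3)
print(one_hot)
# [[0. 1. 0.]
#  [1. 0. 1.]]
```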

The code to build and train the model is as follows:

```python
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Model
from keras.layers import Input, ZeroPadding2D, Conv2D, BatchNormalization, Activation, MaxPooling2D
from keras.layers import Flatten, Dense
import kt_utils
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input
from keras.utils import plot_model
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
import time
import cv2


def convert_to_one_hot(Y, C):
    """Convert a label vector into a one-hot matrix."""
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y


def convert_data():
    """Load the data, normalize the pixels, and one-hot encode the labels."""
    train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes = kt_utils.load_dataset()
    train_x = train_set_x_orig / 255
    test_x = test_set_x_orig / 255
    train_y = convert_to_one_hot(train_set_y_orig, 2).T
    test_y = convert_to_one_hot(test_set_y_orig, 2).T
    return train_x, train_y, test_x, test_y


def test():
    """Look at a sample."""
    train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes = kt_utils.load_dataset()
    X_train = train_set_x_orig / 255
    X_test = test_set_x_orig / 255
    Y_train = train_set_y_orig.T
    Y_test = test_set_y_orig.T
    plt.imshow(train_set_x_orig[63, :, :, :])
    plt.show()


def model(input_shape):
    """Build the CNN model."""
    X_input = Input(input_shape)
    print('input shape = {}'.format(X_input.shape))
    # Zero padding
    X = ZeroPadding2D((3, 3))(X_input)
    print('shape after zero padding = {}'.format(X.shape))
    # CONV -> BN -> RELU
    X = Conv2D(32, (7, 7), strides=(1, 1), name='conv0')(X)
    print('shape after first convolution = {}'.format(X.shape))
    X = BatchNormalization(axis=-1, name='bn0')(X)
    X = Activation('relu')(X)
    # MAXPOOL
    X = MaxPooling2D((2, 2), name='max_pool')(X)
    print('shape after first pooling = {}'.format(X.shape))
    # FLATTEN + FULLY CONNECTED
    X = Flatten()(X)
    X = Dense(2, activation='sigmoid', name='fc')(X)
    model = Model(inputs=X_input, outputs=X, name='HappyModel')
    return model


def testModel():
    """Train and evaluate the model."""
    train_x, train_y, test_x, test_y = convert_data()
    # Build the model
    happyModel = model(input_shape=[64, 64, 3])
    # Compile the model
    happyModel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    # Train the model
    start_time = time.time()
    print('============ training =====================')
    happyModel.fit(x=train_x, y=train_y, epochs=1, batch_size=32)
    end_time = time.time()
    print('train_time={}'.format(end_time - start_time))
    # save the model
    # happyModel.save('my_model_v1.h5')
    print('============ evaluating ===================')
    preds = happyModel.evaluate(x=test_x, y=test_y, batch_size=32)
    print()
    print('loss={}'.format(preds[0]))
    print('Test accuracy={}'.format(preds[1]))
    # Print the parameters
    happyModel.summary()
    # Visualize the model
    plot_model(happyModel, to_file='HappyModel.png')
    SVG(model_to_dot(happyModel).create(prog='dot', format='svg'))
    # Test my own image
    print('============ testing my own photo =========')
    path = 'images/my_image.jpg'
    img = image.load_img(path, target_size=(64, 64))
    plt.imshow(img)
    plt.show()
    x = image.img_to_array(img)  # (64, 64, 3)
    x = x.reshape(1, 64, 64, 3)
    x = preprocess_input(x)
    y = happyModel.predict(x)
    print('prediction = {}'.format(y))


def testPicture():
    # img = cv2.imread('my_image.jpg')
    # cv2.imshow('img', img)
    path = 'images/my_image.jpg'
    img = image.load_img(path, target_size=(64, 64))
    plt.imshow(img)
    plt.show()
    print('img=', img)
    x = image.img_to_array(img)  # (64, 64, 3)
    print('x=', x)
    print(x.shape)
    # x = np.expand_dims(x, axis=0)  # (1, 64, 64, 3)
    x = x.reshape(64, 64, 3)
    print('x=', x)
    print(x.shape)
    plt.imshow(x)
    plt.show()


if __name__ == '__main__':
    # test()
    testModel()
    # testPicture()
```

Output: the ? in the printed shapes stands for the batch (sample-count) dimension; after pooling, the feature map has shape (32, 32, 32).
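The pooled size can be checked by hand with the standard conv/pool output formula (the `conv_out` helper here is mine, not part of the assignment code):

```python
def conv_out(n, f, stride=1, pad=0):
    # Standard formula: floor((n + 2*pad - f) / stride) + 1
    return (n + 2 * pad - f) // stride + 1

n = 64                                # input height/width
n = conv_out(n, f=7, stride=1, pad=3) # (3,3) zero padding, then 7x7 conv -> 64
n = conv_out(n, f=2, stride=2)        # 2x2 max pool, stride 2 -> 32
print(n)  # 32, with 32 channels from the conv filters
```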

Training runs for a single epoch, with the results below: each image takes about 17 ms, so 600 images come to roughly 10 s, a bit less than my recorded train_time because of the other overhead.
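That back-of-the-envelope check, using the ~17 ms/image figure from the Keras progress bar:

```python
per_image_s = 0.017  # ~17 ms per image, read off the Keras progress bar
n_images = 600       # training set size
epoch_compute_s = round(per_image_s * n_images, 1)
print(epoch_compute_s)  # 10.2 -- pure compute; train_time also includes setup overhead
```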

Evaluation result: the test accuracy is mediocre.

Print the model parameters and visualize the model: the conv0 layer has 7×7×3×32 weights (W) + 32 biases (b) = 4736 parameters.
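The counts in `summary()` can be reproduced with plain arithmetic (the helper names below are mine; the fc count follows from flattening the 32×32×32 feature map into 32768 inputs):

```python
def conv2d_params(f_h, f_w, c_in, c_out):
    # Each of the c_out filters has f_h*f_w*c_in weights, plus one bias each.
    return f_h * f_w * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # Weight matrix n_in x n_out, plus one bias per output unit.
    return n_in * n_out + n_out

print(conv2d_params(7, 7, 3, 32))     # 4736, the conv0 count above
print(dense_params(32 * 32 * 32, 2))  # 65538, the fc layer on the flattened map
```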

Testing my own photo:

Output: it looks sort of happy, and sort of not. With so few samples and so little training, the prediction is bound to be inaccurate.

