Andrew Ng Deep Learning C4W1 (PyTorch) Implementation


Problem Description

The task for this assignment has appeared in earlier ones: build a multi-class classifier that recognizes the digit represented by the hand gesture in an image.

Unlike the earlier assignments, the network now includes convolutional (CONV) and pooling (POOL) layers. The overall architecture is:

CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLCONNECTED

import torch
import h5py
import numpy as np
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter('logs')

1 - Data Preprocessing

讀取數(shù)據(jù)并創(chuàng)建數(shù)據(jù)接口。

Hint: TensorDataset packs tensors together, much like Python's zip. The class indexes every tensor along its first dimension, so all tensors passed in must have the same size along that first dimension.
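As a minimal sketch of that zip-like behavior (the tensors here are made up purely for illustration, reusing the imports above):

features = torch.arange(12).reshape(4, 3)   # 4 samples, 3 features each
labels = torch.tensor([0, 1, 0, 1])         # 4 labels, same first dimension

paired = TensorDataset(features, labels)
print(len(paired))    # 4 -- indexed along the first dimension
print(paired[0])      # (tensor([0, 1, 2]), tensor(0)), i.e. the first (x, y) pair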

def load_dataset():
    # Read the data
    train_dataset = h5py.File('datasets/train_signs.h5', 'r')
    train_set_x_orig = torch.tensor(train_dataset['train_set_x'][:])
    train_set_y_orig = torch.tensor(train_dataset['train_set_y'][:])
    test_dataset = h5py.File('datasets/test_signs.h5', 'r')
    test_set_x_orig = torch.tensor(test_dataset['test_set_x'][:])
    test_set_y_orig = torch.tensor(test_dataset['test_set_y'][:])
    # Normalize and reorder to (N, C, H, W)
    train_set_x = train_set_x_orig.permute(0, 3, 1, 2) / 255
    test_set_x = test_set_x_orig.permute(0, 3, 1, 2) / 255
    return train_set_x, train_set_y_orig, test_set_x, test_set_y_orig

def data_loader(X, Y, batch_size=64):
    # Create the data interface
    dataset = TensorDataset(X, Y)
    return DataLoader(dataset, batch_size, shuffle=True)

train_x, train_y, test_x, test_y = load_dataset()
print(f'train_x.shape = {train_x.shape}')
print(f'train_y.shape = {train_y.shape}')
print(f'test_x.shape = {test_x.shape}')
print(f'test_y.shape = {test_y.shape}')

This prints the shapes of the four datasets.
Note: the normalization step also permutes the dimensions, because Conv2d expects its input in the format $(N, C_{in}, H_{in}, W_{in})$.
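A quick sanity check of that permutation (assuming the raw SIGNS images are stored channels-last as (N, H, W, C); the batch size of 1080 is only an assumed example):

x_hwc = torch.zeros(1080, 64, 64, 3)    # channels-last, as read from the .h5 file
x_chw = x_hwc.permute(0, 3, 1, 2)       # reorder to channels-first for nn.Conv2d
print(x_chw.shape)                      # torch.Size([1080, 3, 64, 64])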

2 - Model Encapsulation

The whole network is wrapped in a class CNN containing two convolutional layers and one fully connected layer; each convolutional layer performs convolution, a non-linear activation, and max pooling.
The forward propagation function forward computes the output of each layer in turn. Note that before the output of the second convolutional layer is passed to the fully connected layer, its dimensions must be rearranged so that each sample becomes a one-dimensional vector.

Note: forward propagation does not need a softmax layer, because the loss function already includes the softmax operation.
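A small check of that claim (made-up tensors, just for illustration): nn.CrossEntropyLoss applied to raw logits matches log-softmax followed by nn.NLLLoss, so adding a softmax inside forward would be redundant.

logits = torch.randn(4, 6)                 # raw network outputs, no softmax applied
targets = torch.tensor([0, 2, 5, 1])       # class indices

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(torch.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))             # True -- softmax is folded into the loss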

Because PyTorch's padding argument has no SAME mode (recent versions do accept padding='same', but only for stride-1 convolutions), the padding here is chosen by hand, using the following output-size formula:
$H_{out} \times W_{out} = \left\lfloor\frac{H_{in}+2p-f}{s}+1\right\rfloor \times \left\lfloor\frac{W_{in}+2p-f}{s}+1\right\rfloor$, where $p$ is the padding, $f$ the kernel size, and $s$ the stride.
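Plugging the layer hyperparameters used below into this formula gives the 16 * 4 * 4 input size of the fully connected layer; a quick check (not part of the original assignment code):

def out_size(h, f, p, s):
    # floor((h + 2p - f) / s) + 1
    return (h + 2 * p - f) // s + 1

h = 64
h = out_size(h, f=3, p=1, s=1)   # conv1: 3x3, padding 1, stride 1 -> 64
h = out_size(h, f=4, p=0, s=4)   # maxpool1: 4x4, stride 4         -> 16
h = out_size(h, f=5, p=2, s=1)   # conv2: 5x5, padding 2, stride 1 -> 16
h = out_size(h, f=4, p=0, s=4)   # maxpool2: 4x4, stride 4         -> 4
print(16 * h * h)                # 256, the in_features of the fully connected layer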

The concrete CNN architecture (shown as a figure in the original post) is implemented as follows:

class CNN(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Sequential(  # Layer 1, input: (3, 64, 64)
            nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=4, stride=4, padding=0)
        )
        self.conv2 = nn.Sequential(  # Layer 2
            nn.Conv2d(in_channels=8, out_channels=16, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=4, stride=4, padding=0)
        )
        self.fc = nn.Linear(16 * 4 * 4, 6)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        # Flatten each sample into a one-dimensional vector
        x = x.reshape(x.shape[0], -1)
        x = self.fc(x)
        return x

    def predict(self, x):
        output = self.forward(x)
        pred = self.softmax(output)
        return torch.max(pred, dim=1)[1]
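A quick shape check of the model on a dummy batch (the batch itself is made up; only the 3x64x64 input size matters):

cnn_check = CNN()
dummy = torch.randn(2, 3, 64, 64)       # 2 fake RGB 64x64 images
print(cnn_check(dummy).shape)           # torch.Size([2, 6]) -- one logit per class
print(cnn_check.predict(dummy).shape)   # torch.Size([2])    -- predicted class indices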

3 - Building and Training the Model

def model(train_x, train_y, lr=0.0009, epochs=100, batch_size=64, pc=True):
    cnn = CNN()
    # Load the data
    train_loader = data_loader(train_x, train_y, batch_size)
    # Cross-entropy loss, which already includes softmax
    loss_fn = nn.CrossEntropyLoss()
    # Adam optimizer
    optimizer = torch.optim.Adam(cnn.parameters(), lr=lr)
    # Training loop
    for e in range(epochs):
        epoch_cost = 0
        for step, (batch_x, batch_y) in enumerate(train_loader):
            # Forward propagation
            y_pred = cnn(batch_x)
            # Loss
            loss = loss_fn(y_pred, batch_y)
            epoch_cost += loss.item()
            # Zero the gradients
            optimizer.zero_grad()
            # Backward propagation
            loss.backward()
            # Update the parameters
            optimizer.step()
        epoch_cost /= step + 1
        if e % 5 == 0:
            writer.add_scalar(tag=f'CNN-lr={lr},epochs={epochs}', scalar_value=epoch_cost, global_step=e)
            if pc:
                print(f'epoch={e}, loss={epoch_cost}')
    # Evaluate accuracy on the training set
    y_pred = cnn.predict(train_x)
    print(f'Train Accuracy: {torch.sum(y_pred == train_y) / y_pred.shape[0] * 100:.2f}%')
    # Save the learned parameters
    torch.save(cnn.state_dict(), 'cnn_params.pkl')
    print('Parameters saved to a local pkl file')
    return cnn

cnn = model(train_x, train_y, epochs=200)
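Because the weights are saved with torch.save(cnn.state_dict(), ...), they can be restored later without retraining; a minimal sketch using the same file name as above:

cnn_restored = CNN()
cnn_restored.load_state_dict(torch.load('cnn_params.pkl'))
cnn_restored.eval()   # switch to evaluation mode before running predictions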

The loss curve, logged to TensorBoard, is shown in the figure in the original post.

Evaluate the model:

train_pred = cnn.predict(train_x)
test_pred = cnn.predict(test_x)
print(f'Train Accuracy: {torch.sum(train_pred == train_y) / train_pred.shape[0] * 100:.2f}%')
print(f'Test Accuracy: {torch.sum(test_pred == test_y) / test_pred.shape[0] * 100:.2f}%')


Reference: 实现卷积神经网络:吴恩达Course 4-卷积神经网络-week1作业 pytorch版
