【CV】A One-Hour PyTorch Tutorial: Code Walkthrough
Contents
- I. Key Code, Broken Down
- 1. Defining the Network
- 2. The Loss Function (Cost Function)
- 3. Updating the Weights
- II. Training a Complete Classifier
- 1. Data Processing
- 2. Training the Model (Code Walkthrough)
- CPU Training
- GPU Training
- Differences Between the CPU and GPU Versions
以下神經(jīng)網(wǎng)絡(luò)構(gòu)建均以上圖為例
I. Key Code, Broken Down
1. Defining the Network
import torch
import torch.nn as nn
import torch.nn.functional as F
# Each comment describes the line below it
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # First convolution: 1 input channel, 6 kernels of size 5x5
        self.conv1 = nn.Conv2d(1, 6, 5)
        # Second convolution: 6 input channels (the previous layer used 6 kernels), 16 kernels of size 5x5
        self.conv2 = nn.Conv2d(6, 16, 5)
        # First fully connected layer: input is 16 feature maps of size 5x5
        # (5x5 because the original 32x32 input shrinks to 5x5 after two convolutions
        # and two poolings); flatten them and map to 120 nodes
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        # Map 120 nodes to 84
        self.fc2 = nn.Linear(120, 84)
        # Map 84 nodes to 10
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # Apply ReLU to conv1's output, then 2x2 max pooling
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # Apply ReLU to conv2's output, then 2x2 max pooling
        # (for a square pooling window, a single 2 is equivalent to (2, 2))
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # Flatten the feature maps into a vector
        x = x.view(-1, self.num_flat_features(x))
        # Apply ReLU to fc1's output
        x = F.relu(self.fc1(x))
        # Apply ReLU to fc2's output
        x = F.relu(self.fc2(x))
        # Compute fc3's output
        x = self.fc3(x)
        return x

    # Return the total number of features
    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        # Multiply out the remaining dimensions
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
"""
Net(
  (conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
  (fc1): Linear(in_features=400, out_features=120, bias=True)
  (fc2): Linear(in_features=120, out_features=84, bias=True)
  (fc3): Linear(in_features=84, out_features=10, bias=True)
)
"""

params = list(net.parameters())
# The network has 5 layers, and each layer has a weight and a bias, so there are
# 10 parameter tensors in total; within each layer the weight comes before the bias
print(len(params))
print(params[0].size()) # conv1's .weight
print(params[1].size()) # conv1's .bias
"""
10
torch.Size([6, 1, 5, 5])
torch.Size([6])
"""
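As a cross-check of the sizes printed above, the feature-map dimensions can be traced through the network by hand: each 5x5 convolution (no padding, stride 1) shrinks the image by 4 pixels per side, and each 2x2 max pooling halves it. This is a plain-Python sketch; the helper names `conv_out` and `pool_out` are ours, not part of PyTorch:

```python
def conv_out(size, kernel):
    # output size of a valid (no padding) convolution with stride 1
    return size - kernel + 1

def pool_out(size, window):
    # output size of non-overlapping pooling
    return size // window

size = 32                  # input image is 32x32
size = conv_out(size, 5)   # conv1 -> 28x28
size = pool_out(size, 2)   # pool  -> 14x14
size = conv_out(size, 5)   # conv2 -> 10x10
size = pool_out(size, 2)   # pool  -> 5x5
print(size)                # 5
print(16 * size * size)    # 400, matching fc1's in_features in the printout above
```

This is exactly why `fc1` is declared as `nn.Linear(16 * 5 * 5, 120)`.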
# The 4 arguments to randn are: batch_size=1, 1 channel (a grayscale image), and a 32x32 image
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
"""
tensor([[ 0.0114,  0.0476, -0.0647,  0.0381,  0.0088, -0.1024, -0.0354,  0.0220,
         -0.0471,  0.0586]], grad_fn=<AddmmBackward>)
"""

# Clear the gradient buffers
net.zero_grad()
out.backward(torch.randn(1, 10))  # backpropagate with a random gradient, just to demonstrate the call
2. The Loss Function (Cost Function)
"""
input -> conv2d -> relu -> maxpool2d -> conv2d -> relu -> maxpool2d
      -> view -> linear -> relu -> linear -> relu -> linear
      -> MSELoss
      -> loss
"""
output = net(input)
target = torch.randn(10) # a dummy target, for example
target = target.view(1, -1) # make it the same shape as output
criterion = nn.MSELoss()
loss = criterion(output, target)
print(loss)

print(loss.grad_fn)  # MSELoss
print(loss.grad_fn.next_functions[0][0]) # Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0])  # ReLU
"""
<MseLossBackward object at 0x7efbcad51a58>
<AddmmBackward object at 0x7efbcad51b38>
<AccumulateGrad object at 0x7efbcad51b38>
"""

# Backpropagate to compute the gradients (i.e., the partial derivatives)
net.zero_grad()  # zeroes the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()

print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
"""
conv1.bias.grad before backward
tensor([0., 0., 0., 0., 0., 0.])
conv1.bias.grad after backward
tensor([ 0.0087, -0.0073, 0.0013, 0.0006, -0.0107, -0.0042])
"""
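For reference, `nn.MSELoss` with its default `reduction='mean'` computes the mean of the squared differences between output and target. A plain-Python version on toy numbers (the values below are made up for illustration):

```python
# mean squared error by hand
output = [0.2, -0.5, 1.0]   # made-up network outputs
target = [0.0,  0.5, 1.0]   # made-up targets
mse = sum((o - t) ** 2 for o, t in zip(output, target)) / len(output)
print(round(mse, 4))  # (0.04 + 1.00 + 0.00) / 3 = 0.3467
```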
3. Updating the Weights
Here we use stochastic gradient descent (SGD). The gradient (the partial derivatives) was already computed by `loss.backward()` in the previous section; the update rule is `weight = weight - learning_rate * gradient`, which in code is:
# Update the weights with plain gradient descent
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
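To see this update rule in isolation, here is a tiny plain-Python example (no PyTorch) that minimizes f(w) = (w - 3)² using exactly the `weight = weight - learning_rate * gradient` step:

```python
# Minimize f(w) = (w - 3)^2; its gradient is f'(w) = 2 * (w - 3).
w = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (w - 3)
    w = w - learning_rate * gradient  # the same update rule as above
print(round(w, 4))  # converges to 3.0, the minimum of f
```

Each step moves `w` a fraction of the way toward the minimum; a learning rate that is too large would overshoot instead.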
In a real training loop, use the built-in optimizers instead:
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad() # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() # Does the update
II. Training a Complete Classifier
1. Data Processing
When working with images, text, audio, or video, standard Python libraries can load the data into a numpy array, which can then be converted into a Tensor. Some useful libraries:
- Images: Pillow, OpenCV
- Audio: scipy, librosa
- Text: plain Python, Cython, NLTK, SpaCy
For computer vision specifically there is torchvision, which bundles loaders for many well-known datasets; data loading and transformation go through torchvision.datasets and torch.utils.data.DataLoader.
Here we use the CIFAR10 dataset, whose classes are: 'airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'. Every image in this dataset is 32×32.
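The `transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))` call used below applies `(x - mean) / std` to each channel; with mean = std = 0.5 this maps pixel values from [0, 1] into [-1, 1] (which `img / 2 + 0.5` in `imshow()` later undoes). A plain-Python sketch of the arithmetic (the helper name `normalize` is ours):

```python
def normalize(x, mean=0.5, std=0.5):
    # what transforms.Normalize does to every channel value
    return (x - mean) / std

print(normalize(0.0))  # -1.0: black maps to the bottom of [-1, 1]
print(normalize(0.5))  # 0.0: mid-gray maps to the middle
print(normalize(1.0))  # 1.0: white maps to the top
```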
2. Training the Model (Code Walkthrough)
Training steps:
1. Load the data and normalize it;
2. Define the CNN;
3. Define the loss function;
4. Train the network;
5. Test the network.
CPU Training
import torch
import torch.optim as optim
import torchvision
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
import numpy as np
import torch.utils.data
import torch.nn as nn
import torch.nn.functional as F

# Data processing
transform = transforms.Compose([
    transforms.ToTensor(),  # convert to a tensor
    # Normalize each channel: subtract the mean and divide by the std, which maps the
    # data in all 3 channels into the range [-1, 1].
    # The first tuple holds the per-channel means, the second the per-channel stds.
    # Compute these statistics ahead of time and hard-code them here; otherwise
    # Normalize would have to traverse the whole dataset on every run.
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])

# Define the training set
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download=True, transform=transform)
# Without `import torch.utils.data`, some IDEs warn: Cannot find reference 'data' in '__init__.py'
# torch.utils.data.DataLoader wraps the dataset and serves it as batch_size-sized tensors
# shuffle controls whether the data are reshuffled every epoch; num_workers is the number of
# subprocesses used for loading (0 = load in the main process, the default). Multi-worker
# loading raised an error here; a common cause is running it on Windows without an
# if __name__ == '__main__': guard
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True)
# trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False)
# testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat',
           'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# Helper that displays an image
def imshow(img):
    img = img / 2 + 0.5  # un-normalize: invert Normalize via img * std + mean, with std = mean = 0.5
    npimg = img.numpy()
    # plt.imshow() expects (height, width, channels); for RGB images channels = 3.
    # npimg is laid out as (channels, height, width), so
    # np.transpose(npimg, (1, 2, 0)) reorders it to (height, width, channels).
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# Get some random training images: one batch of batch_size images
# (random because the trainloader was created with shuffle=True)
# A DataLoader is an iterable: iter(dataloader) returns an iterator that can be
# stepped with next() or enumerate()
dataiter = iter(trainloader)
# Iterating the dataloader yields (images, labels): images hold pixel values scaled to
# [0, 1], labels hold the class indices; both arrive batch by batch.
# batch_size=4 was set above, so images holds 4 pictures
images, labels = next(dataiter)  # older tutorials use dataiter.next(), which newer PyTorch removed

# show images
# torchvision.utils.make_grid() tiles several images into one; padding sets the gap between them
imshow(torchvision.utils.make_grid(images, padding=2))
# Print the labels (class names) of the four images, in order
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

# Define the network
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, (5, 5))
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, (5, 5))
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()

# Cross-entropy loss: https://zhuanlan.zhihu.com/p/98785902
criterion = nn.CrossEntropyLoss()
# optim.SGD is stochastic gradient descent; lr is the learning rate, momentum adds
# the momentum method to plain SGD: https://blog.csdn.net/weixin_40793406/article/details/84666803
# Define the criterion and optimizer ONCE, outside the loop: re-creating the optimizer
# on every iteration would throw away its momentum buffers.
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times (2 epochs here)
    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs: a batch of batch_size images in inputs, their class indices in labels
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        # feed the inputs through the network to get the outputs
        outputs = net(inputs)
        # compare the network's predictions against the true labels with cross-entropy
        loss = criterion(outputs, labels)
        # backpropagate the error
        loss.backward()
        # update all parameters (weights)
        optimizer.step()

        # accumulate the loss over this mini-batch
        # "why .item()?": https://blog.csdn.net/github_38148039/article/details/107144632
        running_loss += loss.item()
        if i % 2000 == 1999:  # print every 2000 mini-batches, then reset running_loss
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')

# Create an iterator over the test set
dataiter = iter(testloader)
# Read the first four images of the test set
images, labels = next(dataiter)

# Show those four images together with their true class names (labels)
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))

# Run them through the network to get predictions
outputs = net(images)

# torch.max: the first argument is a tensor (here outputs, the raw class scores);
# the second is the dimension to reduce over: 0 takes the max of each column, 1 of each row.
# It returns (values, indices); we only need the index of the highest score, so the
# values go into _ (unused) and the predicted class indices go into predicted.
# "torch.max() explained": https://www.jianshu.com/p/3ed11362b54f
_, predicted = torch.max(outputs, 1)

# Print the predicted class of each of the four images
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
                              for j in range(4)))

# Initialize the counters to zero
correct = 0
total = 0
# Compute the overall accuracy
# Code inside this block is not tracked for gradients; tracking gradients during
# evaluation would waste (and can exhaust) memory
with torch.no_grad():
    for data in testloader:
        # read one batch (batch_size = 4 images here)
        images, labels = data
        # run the batch through the network
        outputs = net(images)
        # take the index of the highest score in each row
        _, predicted = torch.max(outputs.data, 1)
        # labels.size(0) is the batch size (4 here)
        total += labels.size(0)
        # predicted == labels compares the two tensors element-wise;
        # (predicted == labels).sum() is a tensor holding the number of matches
        # (e.g. tensor(1) when exactly one item matches), and .item() converts it
        # to a plain Python int; correct accumulates the matches over all batches
        correct += (predicted == labels).sum().item()

# Print the network's accuracy on the whole test set
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))

class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
# Compute the per-class accuracy
# Again, no gradients are tracked inside this block
with torch.no_grad():
    for data in testloader:
        # read one batch (batch_size = 4 images here)
        images, labels = data
        # run the batch through the network
        outputs = net(images)
        # take the index of the highest score in each row
        _, predicted = torch.max(outputs, 1)
        # squeeze() drops dimensions of size 1, e.g. a 1x3 matrix becomes a length-3 vector
        c = (predicted == labels).squeeze()
        for i in range(4):
            # label is the class index of the current image
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1
# Print the network's per-class accuracy on the test set
for i in range(10):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
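The accuracy bookkeeping above is easy to verify in plain Python. Here is a miniature version with three classes and made-up predictions (all data below are hypothetical, purely for illustration):

```python
classes = ('cat', 'dog', 'frog')
predicted = [0, 1, 1, 2, 0, 2, 2, 1]  # hypothetical model predictions
labels    = [0, 1, 2, 2, 0, 0, 2, 1]  # hypothetical ground truth

# overall accuracy: count element-wise matches, as (predicted == labels).sum() does
correct = sum(p == l for p, l in zip(predicted, labels))
total = len(labels)
print('overall: %d %%' % (100 * correct / total))  # 6 of 8 correct -> 75 %

# per-class accuracy: bucket each comparison by the true label
class_correct = [0] * len(classes)
class_total = [0] * len(classes)
for p, l in zip(predicted, labels):
    class_correct[l] += (p == l)
    class_total[l] += 1
for i in range(len(classes)):
    print('Accuracy of %5s : %2d %%' % (classes[i], 100 * class_correct[i] / class_total[i]))
```

Note that the per-class counters are always indexed by the true label, so a misprediction hurts the accuracy of the class the image actually belongs to.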
GPU Training
The GPU version is the same script as the CPU version above, with the network and every batch of data moved onto the GPU. Rather than repeating the whole listing, only the changed lines are shown here; splice them into the CPU code at the indicated places.

# In imshow(): numpy cannot read a CUDA tensor, so copy the image back to the CPU first
npimg = img.cpu().numpy()

# Right after net = Net(): use the GPU (CUDA) if available, otherwise fall back to the CPU
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# print which device is being used
print(device)
# move the network onto the device
net.to(device)

# In the training loop, immediately after inputs, labels = data:
inputs, labels = inputs.to(device), labels.to(device)

# After every images, labels = ... that reads a test batch (including inside the two
# evaluation loops), move the batch onto the device as well:
images, labels = images.to(device), labels.to(device)

Differences Between the CPU and GPU Versions

To summarize, the GPU version differs from the CPU version only in:
- imshow() converts with img.cpu().numpy() instead of img.numpy();
- after the network is constructed, a device is chosen, printed, and net.to(device) is called;
- each batch of training inputs and labels is moved with .to(device);
- each batch of test images and labels is likewise moved with .to(device) before use.
Based on the following references:
PyTorch one-hour tutorial
"PyTorch basics / learn PyTorch in one hour" on CSDN
Corrections are welcome if anything here is wrong.