
[PyTorch Neural Network Practical Cases] 10: Building a Deep Convolutional Neural Network

Published: 2024/7/5
This case builds on "Recognizing clothing items in grayscale images (Fashion-MNIST)" (https://blog.csdn.net/qq_39237205/article/details/123379997) and modifies the composition of that model.

1 Modifying the myConNet model

1.1.1 Description of the change

Replace the model's two fully connected layers with global average pooling: a final convolution outputs one channel per class, and pooling over the entire spatial extent collapses each channel to a single class score.

1.1.2 Result of the change

```python
### 1.5 Define the model class
class myConNet(torch.nn.Module):
    def __init__(self):
        super(myConNet, self).__init__()
        # Define the convolutional layers
        self.conv1 = torch.nn.Conv2d(in_channels=1, out_channels=6, kernel_size=3)
        self.conv2 = torch.nn.Conv2d(in_channels=6, out_channels=12, kernel_size=3)
        self.conv3 = torch.nn.Conv2d(in_channels=12, out_channels=10, kernel_size=3)  # 10 output classes

    def forward(self, t):
        # First convolution and pooling stage
        t = self.conv1(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)
        # Second convolution and pooling stage
        t = self.conv2(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)
        # Third convolution followed by global average pooling:
        # the pooling window equals the input's spatial size (its last two
        # dimensions), so each channel is averaged down to a single value.
        t = self.conv3(t)
        t = F.avg_pool2d(t, kernel_size=t.shape[-2:], stride=t.shape[-2:])
        return t.reshape(t.shape[:2])
```
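Tracing a 28×28 Fashion-MNIST image through the layers confirms the shapes; this is a quick self-contained check that restates the model above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MyConNet(nn.Module):  # same structure as myConNet above
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=3)
        self.conv2 = nn.Conv2d(6, 12, kernel_size=3)
        self.conv3 = nn.Conv2d(12, 10, kernel_size=3)

    def forward(self, t):
        t = F.max_pool2d(F.relu(self.conv1(t)), 2, 2)  # 28 -> 26 -> 13
        t = F.max_pool2d(F.relu(self.conv2(t)), 2, 2)  # 13 -> 11 -> 5
        t = self.conv3(t)                              # 5 -> 3
        t = F.avg_pool2d(t, kernel_size=t.shape[-2:])  # 3x3 -> 1x1
        return t.reshape(t.shape[:2])

out = MyConNet()(torch.zeros(4, 1, 28, 28))
print(out.shape)  # torch.Size([4, 10]) -- one score per class
```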

2 Code

```python
import torchvision
import torchvision.transforms as transforms
import pylab
import torch
from matplotlib import pyplot as plt
import torch.utils.data
import torch.nn.functional as F
import numpy as np
import os
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

# Define a function for displaying images
def imshow(img):
    print("Image shape:", np.shape(img))
    img = img / 2 + 0.5
    npimg = img.numpy()
    plt.axis('off')
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

### 1.1 Automatically download the FashionMNIST dataset
data_dir = './fashion_mnist'  # storage location
transform = transforms.Compose([transforms.ToTensor()])  # converts images to PyTorch's [channels, height, width] layout and normalizes the pixel values
train_dataset = torchvision.datasets.FashionMNIST(data_dir, train=True, transform=transform, download=True)
print("Number of training samples:", len(train_dataset))

### 1.2 Read and display samples from the FashionMNIST dataset
val_dataset = torchvision.datasets.FashionMNIST(root=data_dir, train=False, transform=transform)
print("Number of test samples:", len(val_dataset))

## 1.2.1 Display a sample from the dataset
im = train_dataset[0][0].numpy()
im = im.reshape(-1, 28)
pylab.imshow(im)
pylab.show()
print("Label of the current image:", train_dataset[0][1])

### 1.3 Wrap the FashionMNIST dataset in batches
batch_size = 10  # set the batch size
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size, shuffle=False)

### 1.4 Read a batch of data
# Define the class names
classes = ('T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat',
           'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle_Boot')
sample = iter(train_loader)    # turn the dataset into an iterator
images, labels = next(sample)  # take one batch from the iterator (next(sample), not the removed sample.next())
print("Sample shape:", np.shape(images))
# Output: Sample shape torch.Size([10, 1, 28, 28])
print("Sample labels:", labels)
imshow(torchvision.utils.make_grid(images, nrow=batch_size))
# Visualization: make_grid() combines the batch into a single image for display;
# nrow sets the number of samples per row. imshow prints: Image shape torch.Size([3, 32, 302])
print(','.join('%5s' % classes[labels[j]] for j in range(len(images))))
# Output: Trouser,Trouser,Dress, Bag,Shirt,Sandal,Shirt,Dress, Bag, Bag

### 1.5 Define the model class
class myConNet(torch.nn.Module):
    def __init__(self):
        super(myConNet, self).__init__()
        # Define the convolutional layers
        self.conv1 = torch.nn.Conv2d(in_channels=1, out_channels=6, kernel_size=3)
        self.conv2 = torch.nn.Conv2d(in_channels=6, out_channels=12, kernel_size=3)
        self.conv3 = torch.nn.Conv2d(in_channels=12, out_channels=10, kernel_size=3)  # 10 output classes

    def forward(self, t):
        # First convolution and pooling stage
        t = self.conv1(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)
        # Second convolution and pooling stage
        t = self.conv2(t)
        t = F.relu(t)
        t = F.max_pool2d(t, kernel_size=2, stride=2)
        # Third convolution followed by global average pooling: the pooling window
        # equals the input's spatial size (its last two dimensions)
        t = self.conv3(t)
        t = F.avg_pool2d(t, kernel_size=t.shape[-2:], stride=t.shape[-2:])
        return t.reshape(t.shape[:2])

if __name__ == '__main__':
    network = myConNet()  # instantiate the custom module
    # Select a device
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    print(device)
    network.to(device)
    print(network)  # print the myConNet network

    ### 1.6 Loss function and optimizer
    criterion = torch.nn.CrossEntropyLoss()  # instantiate the loss function
    optimizer = torch.optim.Adam(network.parameters(), lr=0.01)

    ### 1.7 Train the model
    for epoch in range(2):  # iterate over the dataset twice
        running_loss = 0.0
        for i, data in enumerate(train_loader, 0):  # loop over batches; enumerate() counts from 0
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()  # clear the previous gradients
            outputs = network(inputs)
            loss = criterion(outputs, labels)  # compute the loss
            loss.backward()   # backpropagate
            optimizer.step()  # update the parameters
            running_loss += loss.item()
            # Report training progress: average the loss over the 1000 batches being summed
            if i % 1000 == 999:
                print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 1000))
                running_loss = 0.0
    print('Finished Training')

    ### 1.8 Save the model
    os.makedirs('./models', exist_ok=True)  # make sure the target directory exists
    torch.save(network.state_dict(), './models/CNNFashionMNist.PTH')

    ### 1.9 Load the model and use it for prediction
    network.load_state_dict(torch.load('./models/CNNFashionMNist.PTH'))  # load the model
    # Use the model
    dataiter = iter(test_loader)  # fetch test data
    images, labels = next(dataiter)
    inputs, labels = images.to(device), labels.to(device)
    imshow(torchvision.utils.make_grid(images, nrow=batch_size))  # display the batch
    print('Ground truth: ', ' '.join('%5s' % classes[labels[j]] for j in range(len(images))))
    # Output: Ground truth: Ankle_Boot Pullover Trouser Trouser Shirt Trouser Coat Shirt Sandal Sneaker
    outputs = network(inputs)  # run the model on the input samples
    _, predicted = torch.max(outputs, 1)  # along dimension 1, the index of the maximum value is the predicted class
    print('Predictions: ', ' '.join('%5s' % classes[predicted[j]] for j in range(len(images))))
    # Output: Predictions: Ankle_Boot Pullover Trouser Trouser Pullover Trouser Shirt Shirt Sandal Sneaker

    ### 1.10 Evaluate the model
    class_correct = list(0. for i in range(10))  # correct count per class
    class_total = list(0. for i in range(10))    # total count per class
    with torch.no_grad():
        for data in test_loader:  # iterate over the test set
            images, labels = data
            inputs, labels = images.to(device), labels.to(device)
            outputs = network(inputs)             # feed each batch through the model
            _, predicted = torch.max(outputs, 1)  # compute the predictions
            predicted = predicted.to(device)
            c = (predicted == labels).squeeze()   # per-sample correctness
            for i in range(len(labels)):  # iterate over the samples in the batch
                label = labels[i]
                class_correct[label] = class_correct[label] + c[i].item()  # +1 when this sample is correct
                class_total[label] = class_total[label] + 1                # count samples per class
    sumacc = 0
    for i in range(10):  # report the result for each class
        Accuracy = 100 * class_correct[i] / class_total[i]
        print('Accuracy of %5s : %2d %%' % (classes[i], Accuracy))
        sumacc = sumacc + Accuracy
    print('Accuracy of all : %2d %%' % (sumacc / 10.))  # overall accuracy
```
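One consequence of replacing the fully connected layers with convolution plus global average pooling is a very small parameter count; it can be checked directly. This is a sketch using the same layer sizes as the model above (the fully connected baseline from the linked article is not reproduced here):

```python
import torch.nn as nn

# The three convolutional layers of the all-convolutional model
layers = [
    nn.Conv2d(1, 6, kernel_size=3),    # 1*6*3*3 weights + 6 biases  = 60
    nn.Conv2d(6, 12, kernel_size=3),   # 6*12*3*3 weights + 12 biases = 660
    nn.Conv2d(12, 10, kernel_size=3),  # 12*10*3*3 weights + 10 biases = 1090
]
total = sum(p.numel() for layer in layers for p in layer.parameters())
print(total)  # 1810 -- global average pooling itself adds no parameters
```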

Output:

Accuracy of T-shirt : 72 %
Accuracy of Trouser : 96 %
Accuracy of Pullover : 75 %
Accuracy of Dress : 72 %
Accuracy of Coat : 75 %
Accuracy of Sandal : 90 %
Accuracy of Shirt : 35 %
Accuracy of Sneaker : 93 %
Accuracy of Bag : 92 %
Accuracy of Ankle_Boot : 92 %
Accuracy of all : 79 %
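The per-class tally used in the evaluation loop can be illustrated on a tiny hand-made batch; the labels and predictions here are invented for demonstration:

```python
import torch

classes = ('T-shirt', 'Trouser', 'Pullover')
labels    = torch.tensor([0, 1, 1, 2, 2, 2])  # ground truth
predicted = torch.tensor([0, 1, 0, 2, 2, 1])  # made-up model output

class_correct = [0.0] * len(classes)
class_total   = [0.0] * len(classes)
c = (predicted == labels)            # per-sample correctness
for i in range(len(labels)):         # iterate over the batch
    label = labels[i]
    class_correct[label] += c[i].item()
    class_total[label] += 1

for i, name in enumerate(classes):
    print('Accuracy of %8s : %2d %%' % (name, 100 * class_correct[i] / class_total[i]))
# T-shirt: 1/1 correct, Trouser: 1/2 correct, Pullover: 2/3 correct
```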

Summary

This case modified the Fashion-MNIST model from the previous article by replacing its two fully connected layers with a third convolutional layer followed by global average pooling; after two training epochs the network reaches an overall test accuracy of about 79%.