

【pytorch】Custom VGG16 training and test dataset in PyTorch; fine-tuning the ResNet18 fully connected layer

Published: 2024/9/30

Defining my own model

Testing:

```python
correct = 0
total = 0
for data in test_loader:
    img, label = data
    outputs = net(Variable(img))
    _, predict = torch.max(outputs.data, 1)
    total += label.size(0)
    correct += (predict == label).sum()
    print(str(predict) + ',' + str(label))
print(100 * correct / total)
```
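One habit worth adding to the loop above (standard PyTorch practice, not something the original loop does): switch the network to eval mode and disable autograd while testing. Dropout and batch-norm layers behave differently in train mode, and `no_grad` avoids wasted gradient bookkeeping. A minimal sketch with a stand-in model:

```python
import torch
import torch.nn as nn

net = nn.Linear(4, 2)      # stand-in for the CNN defined below
batch = torch.randn(3, 4)  # stand-in for one test batch

net.eval()                 # dropout/batch-norm switch to inference behaviour
with torch.no_grad():      # no gradient tracking during evaluation
    outputs = net(batch)
    _, predict = torch.max(outputs, 1)
print(predict.shape)  # torch.Size([3])
```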

Output:

The predictions are quite wrong: the network outputs 1 for every sample.
Full code:

```python
import torch.nn.functional as F
import torch
import torch.nn as nn
from torch.autograd import Variable
from torchvision import transforms
from torch.utils.data.dataset import Dataset
from torch.utils.data.dataloader import DataLoader
from PIL import Image
import torch.optim as optim
import os

# *************************** initialization ***************************
# torch.cuda.set_device(gpu_id)  # use GPU
learning_rate = 0.0001  # learning rate

# *************************** dataset setup ***************************
root = os.getcwd() + '\\data\\'  # dataset path

# define how image files are read
def default_loader(path):
    return Image.open(path).convert('RGB')

class MyDataset(Dataset):
    # custom dataset class inheriting from torch.utils.data.Dataset
    def __init__(self, txt, transform=None, target_transform=None, test=False, loader=default_loader):
        super(MyDataset, self).__init__()
        imgs = []
        fh = open(txt, 'r')  # open the txt file listing the samples, read-only
        for line in fh:  # one sample per line
            line = line.strip('\n')  # drop the trailing newline
            words = line.split()  # split on whitespace: words[0] is the image file, words[1] is the label
            imgs.append((words[0], int(words[1])))
        self.test = test
        self.imgs = imgs
        self.transform = transform
        self.target_transform = target_transform

    def __getitem__(self, index):  # required: return one sample by index
        fn, label = self.imgs[index]  # fn is the image path from words[0], label from words[1]
        if self.test is False:
            img_path = os.path.join("C:\\Users\\pic\\train", fn)
        else:
            img_path = os.path.join("C:\\Users\\pic\\test", fn)
        img = Image.open(img_path).convert('RGB')  # read the image from disk
        if self.transform is not None:
            img = self.transform(img)  # convert to Tensor
        return img, label  # whatever is returned here is what each batch yields during training

    def __len__(self):  # required: the dataset length (number of images, not number of batches)
        return len(self.imgs)

class Net(nn.Module):  # define the network, inheriting from torch.nn.Module
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)        # conv layer
        self.pool = nn.MaxPool2d(2, 2)         # pooling layer
        self.conv2 = nn.Conv2d(6, 16, 5)       # conv layer
        self.fc1 = nn.Linear(16 * 5 * 5, 120)  # fully connected layers
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 2)            # 2 outputs

    def forward(self, x):  # forward pass; F is torch.nn.functional
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)  # reshape from conv output to fc input; element count is unchanged
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

IMG_MEAN = [0.485, 0.456, 0.406]
IMG_STD = [0.229, 0.224, 0.225]

net = Net()  # instantiate the LeNet-style CNN
train_data = MyDataset(txt=root + 'num.txt', transform=transforms.Compose([
    transforms.RandomHorizontalFlip(),  # horizontal flip
    transforms.Resize((32, 32)),        # scale to the given (h, w)
    transforms.CenterCrop(32),
    transforms.ToTensor()]))
test_data = MyDataset(txt=root + 'test.txt', transform=transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.CenterCrop(32),
    transforms.ToTensor()]), test=True)
train_loader = DataLoader(dataset=train_data, batch_size=227, shuffle=True, drop_last=True)
# batch_size: how many images are drawn per step; every epoch feeds batch_size images per step
print('num_of_trainData:', len(train_data))
test_loader = DataLoader(dataset=test_data, batch_size=19, shuffle=False)

def trainandsave():
    print('h')  # debug print
    net = Net()
    optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)  # learning rate 0.001
    criterion = nn.CrossEntropyLoss()  # the loss could be custom; here it is cross-entropy
    # training loop
    for epoch in range(10):  # 10 epochs; each epoch visits all images
        running_loss = 0.0  # accumulator so we can print the loss periodically
        for i, data in enumerate(train_loader, 0):  # enumerate gives both the index and the batch
            inputs, labels = data  # each batch carries images and labels
            inputs, labels = Variable(inputs), Variable(labels)  # wrap in Variable
            optimizer.zero_grad()  # zero the gradients; backprop accumulates them otherwise
            # forward + backward + optimize
            outputs = net(inputs)  # feed the batch through the CNN
            loss = criterion(outputs, labels)  # compute the loss
            loss.backward()   # backpropagate
            optimizer.step()  # update parameters
            running_loss += loss.item()  # accumulate the loss
            if i % 9 == 1:
                print('[%d, %5d] loss: %.3f' % (epoch + 1, i + 1, running_loss / 10))  # average loss
                running_loss = 0.0  # reset after printing
    print('Finished Training')
    # save the network
    torch.save(net, 'net.pkl')                   # whole model: structure and parameters
    torch.save(net.state_dict(), 'net_params.pkl')  # parameters only
```

Trying to run VGG16:

After searching for a long time, my guess was an input-size problem: the input images must be 224×224. I changed Resize first but still got an error, so I changed the crop to
transforms.CenterCrop((224, 224))
Then data[0] had to become .item().
After that it ran successfully (very slowly...).
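The size mismatch makes sense if you trace the shapes. For the small Net above, a 32×32 input shrinks to 5×5 before flattening, which is exactly where 16 * 5 * 5 comes from; any other input size breaks that Linear layer. A quick pure-Python check of the arithmetic:

```python
# Output-size formula for conv and pool layers: (size + 2p - k) // s + 1.
def conv_out(size, kernel, stride=1, padding=0):
    return (size + 2 * padding - kernel) // stride + 1

side = 32
side = conv_out(side, 5)            # conv1 (5x5, no padding): 32 -> 28
side = conv_out(side, 2, stride=2)  # pool: 28 -> 14
side = conv_out(side, 5)            # conv2: 14 -> 10
side = conv_out(side, 2, stride=2)  # pool: 10 -> 5
print(16 * side * side)  # 400, i.e. 16 * 5 * 5
```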

```python
class VGG16(nn.Module):
    def __init__(self, nums=2):
        super(VGG16, self).__init__()
        self.nums = nums
        vgg = []
        # block 1 -> 112, 112, 64
        vgg.append(nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))
        # block 2 -> 56, 56, 128
        vgg.append(nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))
        # block 3 -> 28, 28, 256
        vgg.append(nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))
        # block 4 -> 14, 14, 512
        vgg.append(nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))
        # block 5 -> 7, 7, 512
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1))
        vgg.append(nn.ReLU())
        vgg.append(nn.MaxPool2d(kernel_size=2, stride=2))
        # nn.Sequential takes an OrderedDict or a sequence of modules,
        # so the list above must be unpacked with *
        self.main = nn.Sequential(*vgg)
        # classifier: flatten [batch, channels, h, w] -> [batch, channels*h*w]
        classfication = []
        classfication.append(nn.Linear(in_features=512 * 7 * 7, out_features=4096))
        # 4096 output neurons: 512*7*7*4096 weights + 4096 biases
        classfication.append(nn.ReLU())
        classfication.append(nn.Dropout(p=0.5))
        classfication.append(nn.Linear(in_features=4096, out_features=4096))
        classfication.append(nn.ReLU())
        classfication.append(nn.Dropout(p=0.5))
        classfication.append(nn.Linear(in_features=4096, out_features=self.nums))
        self.classfication = nn.Sequential(*classfication)

    def forward(self, x):
        feature = self.main(x)                   # feed the input tensor through the conv blocks
        feature = feature.view(x.size(0), -1)    # reshape to [batch, channels*h*w]
        # feature = feature.view(-1, 116224)
        result = self.classfication(feature)
        return result

net = Net()  # instantiate the LeNet-style CNN
train_data = MyDataset(txt=root + 'num.txt', transform=transforms.Compose([
    transforms.RandomHorizontalFlip(),  # horizontal flip
    transforms.Resize((224, 224)),      # scale to the given (h, w)
    transforms.CenterCrop((224, 224)),
    transforms.ToTensor()]))
test_data = MyDataset(txt=root + 'test.txt', transform=transforms.Compose([
    transforms.Resize((32, 32)),  # note: still 32x32; evaluating the VGG would need 224x224 here
    transforms.CenterCrop(32),
    transforms.ToTensor()]), test=True)
train_loader = DataLoader(dataset=train_data, batch_size=16, shuffle=True, drop_last=True)
print('num_of_trainData:', len(train_data))
test_loader = DataLoader(dataset=test_data, batch_size=19, shuffle=False)

if __name__ == '__main__':
    # trainandsave()
    vgg = VGG16()
    # vgg = VGG16(2)
    optimizer = optim.SGD(vgg.parameters(), lr=0.001, momentum=0.9)  # learning rate 0.001
    criterion = nn.CrossEntropyLoss()  # cross-entropy loss
    # training loop
    for epoch in range(10):  # 10 epochs; each epoch visits all images
        running_loss = 0.0
        train_loss = 0.
        train_acc = 0.
        for i, data in enumerate(train_loader, 0):  # enumerate gives both the index and the batch
            inputs, labels = data
            inputs, labels = Variable(inputs), Variable(labels)  # wrap in Variable
            optimizer.zero_grad()  # zero the gradients; backprop accumulates them otherwise
            # forward + backward + optimize
            outputs = vgg(inputs)  # feed the batch through the VGG
            loss = criterion(outputs, labels)
            train_loss += loss.item()
            pred = torch.max(outputs, 1)[1]
            train_correct = (pred == labels).sum()
            train_acc += train_correct.item()
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        print('Train Loss: {:.6f}, Acc: {:.6f}'.format(train_loss / len(train_data), train_acc / len(train_data)))
    print('Finished Training')
    # save the trained VGG (my first version saved `net`, which was never trained)
    torch.save(vgg, 'net.pkl')                      # whole model
    torch.save(vgg.state_dict(), 'net_params.pkl')  # parameters only
```
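The 512 * 7 * 7 in the first Linear layer follows from the five pooling stages: the 3×3, stride-1, padding-1 convolutions preserve spatial size, and each MaxPool2d(2, 2) halves it, so 224 / 2^5 = 7. A quick check:

```python
side = 224
for _ in range(5):  # five MaxPool2d(2, 2) stages
    side //= 2
print(512 * side * side)  # 25088, i.e. 512 * 7 * 7
```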

Tried changing the optimizer:

```python
optimizer = optim.Adam(vgg.parameters(), lr=1e-6)
```

The results are still unsatisfactory. I suspect the dataset itself has large errors.
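One cheap sanity check when a classifier collapses to a single class is the label distribution of the training list: a network can look "trained" while just predicting the majority class. A sketch, assuming the num.txt format used above ("&lt;filename&gt; &lt;label&gt;" per line; the filenames and counts here are made up for illustration):

```python
from collections import Counter

# hypothetical contents of num.txt
sample_txt = """img001.jpg 1
img002.jpg 1
img003.jpg 0
img004.jpg 1
"""

counts = Counter(int(line.split()[1]) for line in sample_txt.splitlines() if line.strip())
print(counts)  # Counter({1: 3, 0: 1})
```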

Loading an existing model and fine-tuning its parameters

The two main transfer-learning scenarios:
Finetuning the convnet: instead of random initialization, we initialize the network with pretrained weights, such as a network trained on the ImageNet-1000 dataset. The rest of training looks as usual. (This fine-tuning corresponds to the "initialization" in the quoted description.)
ConvNet as fixed feature extractor: here we freeze the weights of the whole network except the final fully connected layer. That last layer is replaced with a new one with random weights, and only this layer is trained. (This corresponds to the "fixed feature extractor" in the quoted description.)
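The second scenario boils down to two steps in PyTorch: freeze every parameter, then replace the head (newly created layers default to requires_grad=True). A sketch with a toy model standing in for resnet18:

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))
for param in model.parameters():
    param.requires_grad = False  # freeze the whole network
model[2] = nn.Linear(8, 2)       # replace the head; its fresh params are trainable

trainable = [p for p in model.parameters() if p.requires_grad]
print(len(trainable))  # 2: the weight and bias of the new head
```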

I train by loading a pretrained model and resetting the final fully connected layer.
Each epoch runs both a training and a validation pass. I used resnet18 (download the .pth file from the web yourself first; downloading it from inside PyCharm is too slow).

All the code so far:

```python
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]),
}

data_dir = os.getcwd() + '\\data\\'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4, shuffle=True)
               for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean  # undo the normalization for display
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()
    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)
        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                scheduler.step()
                model.train()  # set model to training mode
            else:
                model.eval()   # set model to evaluate mode (missing in my first version)
            running_loss = 0.0
            running_corrects = 0
            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                optimizer.zero_grad()  # zero the parameter gradients
                # track history only in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)
                    if phase == 'train':
                        # backward + optimize only in the training phase
                        loss.backward()
                        optimizer.step()
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            epoch_loss = running_loss / dataset_sizes[phase]
            epoch_acc = running_corrects.double() / dataset_sizes[phase]
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))
            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
        print()
    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:4f}'.format(best_acc))
    # load best model weights
    model.load_state_dict(best_model_wts)
    return model

class_names = image_datasets['train'].classes

def visualize_model(model, num_images=6):
    was_training = model.training
    model.eval()
    images_so_far = 0
    fig = plt.figure()
    with torch.no_grad():
        for i, (inputs, labels) in enumerate(dataloaders['val']):
            outputs = model(inputs)
            _, preds = torch.max(outputs, 1)
            for j in range(inputs.size()[0]):
                images_so_far += 1
                ax = plt.subplot(num_images // 2, 2, images_so_far)
                ax.axis('off')
                ax.set_title('predicted: {}'.format(class_names[preds[j]]))
                imshow(inputs.cpu().data[j])
                if images_so_far == num_images:
                    model.train(mode=was_training)
                    return
        model.train(mode=was_training)

model_ft = models.resnet18(pretrained=False)
pthfile = r'C:\Users\14172\PycharmProjects\pythonProject4\resnet18-5c106cde.pth'
model_ft.load_state_dict(torch.load(pthfile))  # load the locally downloaded weights
# model_ft = models.vgg16(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)  # reset the final fully connected layer: 2 classes
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)
# save the network
torch.save(model_ft, 'modefresnet.pkl')                      # whole model
torch.save(model_ft.state_dict(), 'modelresnet_params.pkl')  # parameters only
visualize_model(model_ft)
```
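For reference, StepLR(step_size=7, gamma=0.1) as used above multiplies the learning rate by 0.1 every 7 epochs. A small standalone check of the schedule (the dummy parameter is only there so SGD has something to hold):

```python
import torch
from torch.optim import SGD, lr_scheduler

opt = SGD([torch.zeros(1, requires_grad=True)], lr=0.001)
sched = lr_scheduler.StepLR(opt, step_size=7, gamma=0.1)

lrs = []
for _ in range(15):
    lrs.append(opt.param_groups[0]['lr'])
    opt.step()    # a real loop would do forward/backward first
    sched.step()  # advance the schedule once per epoch
# lrs[0] = 1e-3, then roughly 1e-4 from epoch 7 and 1e-5 from epoch 14
```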


The two still differ quite a bit; I'll tune this later and just record it for now.
