4.2 Building a VGG Network with PyTorch


Contents

  • Splitting VGG into two parts
    • Feature-extraction network
    • Classification network
  • model
    • Input: positional (non-keyword) arguments or an OrderedDict
      • [Python - positional and keyword arguments (*args **kw)](https://blog.csdn.net/weixin_44023658/article/details/105925199?utm_medium=distribute.wap_relevant.none-task-blog-title-1)
  • predict
    • Many people subtract the three ImageNet per-channel means from the RGB values; transfer learning may require it
  • train

Splitting VGG into two parts

Note that this network is large, so it trains slowly and is data-hungry (the quick sanity check after the model-factory code below counts its parameters).

Feature-extraction network

Classification network

Note that this implementation uses 2048-wide fully connected layers in the classifier rather than the paper's 4096, which substantially reduces the parameter count.

model

```python
import torch.nn as nn
import torch


class VGG(nn.Module):
    def __init__(self, features, num_classes=1000, init_weights=False):
        # `features` is the feature-extraction backbone built by make_features()
        super(VGG, self).__init__()
        self.features = features
        self.classifier = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(512*7*7, 2048),
            nn.ReLU(True),
            nn.Dropout(p=0.5),
            nn.Linear(2048, 2048),
            nn.ReLU(True),
            nn.Linear(2048, num_classes)
        )
        if init_weights:
            # only initialize the parameters when explicitly requested
            self._initialize_weights()

    def forward(self, x):
        # N x 3 x 224 x 224
        x = self.features(x)
        # N x 512 x 7 x 7
        # flatten; start_dim says which dimension to start flattening from
        # (dim 0 is the batch dimension, which is kept)
        x = torch.flatten(x, start_dim=1)
        # N x 512*7*7
        x = self.classifier(x)
        return x

    def _initialize_weights(self):
        # weight-initialization helper: iterate over every submodule
        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                # for conv layers, use Xavier initialization
                # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
                nn.init.xavier_uniform_(m.weight)
                if m.bias is not None:
                    # if a bias is used, zero it
                    nn.init.constant_(m.bias, 0)
            elif isinstance(m, nn.Linear):
                # for fully connected layers
                nn.init.xavier_uniform_(m.weight)
                # nn.init.normal_(m.weight, 0, 0.01)
                nn.init.constant_(m.bias, 0)


def make_features(cfg: list):
    # takes a configuration list and builds the feature extractor
    layers = []
    in_channels = 3  # RGB input
    for v in cfg:
        if v == "M":
            # pooling kernel size and stride are both 2
            layers += [nn.MaxPool2d(kernel_size=2, stride=2)]
        else:
            # stride defaults to 1, so it is omitted
            conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=1)
            layers += [conv2d, nn.ReLU(True)]
            in_channels = v  # the output depth becomes v
    return nn.Sequential(*layers)  # unpack the list as positional (non-keyword) arguments
```
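As a quick illustration of the `torch.flatten(x, start_dim=1)` call above, here is a minimal standalone sketch (not part of model.py) showing that only the batch dimension survives:

```python
import torch

# a dummy feature map shaped like the output of self.features: N x 512 x 7 x 7
x = torch.randn(2, 512, 7, 7)
# flatten everything from dim 1 onward; dim 0 (the batch) is kept
print(torch.flatten(x, start_dim=1).shape)  # torch.Size([2, 25088])
```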

Input to nn.Sequential: positional (non-keyword) arguments or an OrderedDict

See also: [Python - positional and keyword arguments (*args **kw)](https://blog.csdn.net/weixin_44023658/article/details/105925199?utm_medium=distribute.wap_relevant.none-task-blog-title-1)
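To illustrate the two constructor forms, here is a minimal standalone sketch (not part of the article's code): `make_features()` uses the unpacked-list form, while an `OrderedDict` additionally names each layer.

```python
from collections import OrderedDict
import torch.nn as nn

layers = [nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(True)]

# positional (non-keyword) arguments, as in make_features():
seq_args = nn.Sequential(*layers)

# an OrderedDict, which also names each submodule:
seq_dict = nn.Sequential(OrderedDict([
    ("conv1", nn.Conv2d(3, 64, kernel_size=3, padding=1)),
    ("relu1", nn.ReLU(True)),
]))
print(seq_dict.conv1)  # named layers can then be accessed as attributes
```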


```python
# model configurations
cfgs = {
    # configuration A: numbers are conv-layer output channels, 'M' marks a max-pooling (downsampling) layer
    'vgg11': [64, 'M', 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    # configuration B
    'vgg13': [64, 64, 'M', 128, 128, 'M', 256, 256, 'M', 512, 512, 'M', 512, 512, 'M'],
    # configuration D
    'vgg16': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512, 512, 512, 'M'],
    # configuration E
    'vgg19': [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'],
}


def vgg(model_name="vgg16", **kwargs):
    try:
        cfg = cfgs[model_name]
    except KeyError:
        print("Warning: model name {} not in cfgs dict!".format(model_name))
        exit(-1)
    # the first argument is `features`; the remaining keyword arguments
    # (e.g. num_classes=1000, init_weights=False) are forwarded to VGG via **kwargs
    model = VGG(make_features(cfg), **kwargs)
    return model
```
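A quick sanity check of the factory (a sketch, assuming the model.py above is saved and importable): build vgg16 for 5 classes, count its parameters, and run a dummy forward pass. The figure of roughly 70 million parameters is my estimate for this 2048-wide classifier, not a number from the article; the paper's 4096-wide head brings standard VGG-16 to about 138 million.

```python
import torch
from model import vgg

net = vgg(model_name="vgg16", num_classes=5, init_weights=True)
# roughly 70M parameters with the 2048-wide classifier
print(sum(p.numel() for p in net.parameters()))

x = torch.randn(1, 3, 224, 224)  # dummy batch: one 224x224 RGB image
print(net(x).shape)              # torch.Size([1, 5])
```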

predict

```python
import torch
from model import vgg
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt
import json

data_transform = transforms.Compose(
    [transforms.Resize((224, 224)),
     transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

# load image
img = Image.open("../tulip.jpg")
plt.imshow(img)
# [N, C, H, W]
img = data_transform(img)
# expand batch dimension
img = torch.unsqueeze(img, dim=0)

# read class_indict
try:
    json_file = open('./class_indices.json', 'r')
    class_indict = json.load(json_file)
except Exception as e:
    print(e)
    exit(-1)

# create model
model = vgg(model_name="vgg16", num_classes=5)
# load model weights
model_weight_path = "./vgg16Net.pth"
model.load_state_dict(torch.load(model_weight_path))
model.eval()
with torch.no_grad():
    # predict class
    output = torch.squeeze(model(img))
    predict = torch.softmax(output, dim=0)
    predict_cla = torch.argmax(predict).numpy()
print(class_indict[str(predict_cla)])
plt.show()
```

Many people instead subtract the three ImageNet per-channel means from the RGB values; if you are doing transfer learning from ImageNet-pretrained weights, you will likely need to normalize with those same statistics, as sketched below.
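A sketch of what that ImageNet-style preprocessing would look like; the means and stds below are the standard ImageNet per-channel statistics, not values computed in this article:

```python
from torchvision import transforms

imagenet_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),  # scales pixels to [0, 1], C x H x W
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet channel means
                         std=[0.229, 0.224, 0.225]),  # ImageNet channel stds
])
```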

train

```python
import torch.nn as nn
from torchvision import transforms, datasets
import json
import os
import torch.optim as optim
from model import vgg
import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)

data_transform = {
    "train": transforms.Compose([transforms.RandomResizedCrop(224),
                                 transforms.RandomHorizontalFlip(),
                                 transforms.ToTensor(),
                                 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))]),
    "val": transforms.Compose([transforms.Resize((224, 224)),
                               transforms.ToTensor(),
                               transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])}

data_root = os.path.abspath(os.path.join(os.getcwd(), "../.."))  # get data root path
image_path = data_root + "/data_set/flower_data/"  # flower data set path

train_dataset = datasets.ImageFolder(root=image_path + "train",
                                     transform=data_transform["train"])
train_num = len(train_dataset)

# {'daisy':0, 'dandelion':1, 'roses':2, 'sunflower':3, 'tulips':4}
flower_list = train_dataset.class_to_idx
cla_dict = dict((val, key) for key, val in flower_list.items())
# write dict into json file
json_str = json.dumps(cla_dict, indent=4)
with open('class_indices.json', 'w') as json_file:
    json_file.write(json_str)

batch_size = 32
train_loader = torch.utils.data.DataLoader(train_dataset,
                                           batch_size=batch_size, shuffle=True,
                                           num_workers=0)

validate_dataset = datasets.ImageFolder(root=image_path + "val",
                                        transform=data_transform["val"])
val_num = len(validate_dataset)
validate_loader = torch.utils.data.DataLoader(validate_dataset,
                                              batch_size=batch_size, shuffle=False,
                                              num_workers=0)

# test_data_iter = iter(validate_loader)
# test_image, test_label = test_data_iter.next()

model_name = "vgg16"  # use the 16-layer configuration
net = vgg(model_name=model_name, num_classes=5, init_weights=True)
net.to(device)
loss_function = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=0.0001)

best_acc = 0.0
save_path = './{}Net.pth'.format(model_name)
for epoch in range(30):
    # train
    net.train()
    running_loss = 0.0
    for step, data in enumerate(train_loader, start=0):
        images, labels = data
        optimizer.zero_grad()
        outputs = net(images.to(device))
        loss = loss_function(outputs, labels.to(device))
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        # print train progress
        rate = (step + 1) / len(train_loader)
        a = "*" * int(rate * 50)
        b = "." * int((1 - rate) * 50)
        print("\rtrain loss: {:^3.0f}%[{}->{}]{:.3f}".format(int(rate * 100), a, b, loss), end="")
    print()

    # validate
    net.eval()
    acc = 0.0  # accumulate number of correct predictions per epoch
    with torch.no_grad():
        for val_data in validate_loader:
            val_images, val_labels = val_data
            outputs = net(val_images.to(device))
            predict_y = torch.max(outputs, dim=1)[1]
            acc += (predict_y == val_labels.to(device)).sum().item()
        val_accurate = acc / val_num
        if val_accurate > best_acc:
            best_acc = val_accurate
            torch.save(net.state_dict(), save_path)
        # divide by the number of batches to get the mean training loss
        print('[epoch %d] train_loss: %.3f  test_accuracy: %.3f' %
              (epoch + 1, running_loss / len(train_loader), val_accurate))

print('Finished Training')
```
