
ResNet in PyTorch: A Detailed Code Walkthrough

發(fā)布時(shí)間:2024/4/11 编程问答 27 豆豆
生活随笔 收集整理的這篇文章主要介紹了 pytorch中resnet_ResNet代码详解 小編覺(jué)得挺不錯(cuò)的,現(xiàn)在分享給大家,幫大家做個(gè)參考.

代碼學(xué)習(xí)第一天! fighting!

```python
import torch.nn as nn
import math
import torch.utils.model_zoo as model_zoo

# This file provides six different network constructors
__all__ = ['ResNet', 'resnet18', 'resnet34', 'resnet50', 'resnet101',
           'resnet152']

# Each architecture has a ready-to-use pretrained parameter file
model_urls = {
    'resnet18': 'https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth',
    'resnet34': 'https://s3.amazonaws.com/pytorch/models/resnet34-333f7ec4.pth',
    'resnet50': 'https://s3.amazonaws.com/pytorch/models/resnet50-19c8e357.pth',
    'resnet101': 'https://s3.amazonaws.com/pytorch/models/resnet101-5d3b4d8f.pth',
    'resnet152': 'https://s3.amazonaws.com/pytorch/models/resnet152-b121ed2d.pth',
}

# The common 3x3 convolution
def conv3x3(in_planes, out_planes, stride=1):
    "3x3 convolution with padding"
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)

# The BasicBlock of the residual network; what it does is explained below:
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        # inplanes is the number of input channels, planes the number of output channels
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out
```
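As a quick sanity check (not part of the original file), the BasicBlock forward pass can be written out with bare layers: two 3x3 conv-bn pairs followed by the residual addition. With stride=1 and matching channels, no downsample branch is needed and the output shape equals the input shape:

```python
import torch
import torch.nn as nn

# The same layer sequence as BasicBlock, unrolled for illustration
conv1 = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)
bn1 = nn.BatchNorm2d(64)
conv2 = nn.Conv2d(64, 64, kernel_size=3, padding=1, bias=False)
bn2 = nn.BatchNorm2d(64)
relu = nn.ReLU()

x = torch.randn(2, 64, 56, 56)
out = relu(bn2(conv2(relu(bn1(conv1(x))))) + x)  # out = relu(F(x) + x)
assert out.shape == x.shape  # shapes match, so the residual addition is valid
```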

1. In the BasicBlock class, __init__() defines the network layers and forward() performs the forward pass. Together they implement the residual block: the input x passes through conv-bn-relu-conv-bn to produce F(x), and the output is relu(F(x) + x).

2. The Bottleneck class is the other block type. As before, __init__() defines the layers and forward() performs the forward pass. This block has three convolutions, 1x1, 3x3, 1x1, which respectively compress the channel dimension, convolve, and restore the channel dimension. So what the bottleneck does is squeeze the channel count and then expand it again. Note: here planes is no longer the number of output channels; the output channel count is planes * expansion, i.e. 4 * planes.

```python
class Bottleneck(nn.Module):
    expansion = 4  # multiplier for the number of output channels

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(planes)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(planes)
        self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)
        self.bn3 = nn.BatchNorm2d(planes * 4)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out
```
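The compress-then-expand channel arithmetic can be seen with three bare convolutions (an illustrative sketch, not part of the original file): a 1x1 conv squeezes inplanes down to planes, the 3x3 conv works at the compressed width, and a final 1x1 conv expands to planes * 4:

```python
import torch
import torch.nn as nn

inplanes, planes = 256, 64
squeeze = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)   # 256 -> 64
conv = nn.Conv2d(planes, planes, kernel_size=3, padding=1, bias=False)  # 64 -> 64
expand = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False)  # 64 -> 256

x = torch.randn(1, inplanes, 14, 14)
y = expand(conv(squeeze(x)))
assert y.shape[1] == planes * 4  # output channels are 4 * planes, not planes
```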

這兩個(gè)class講清楚的話,后面的網(wǎng)絡(luò)主體架構(gòu)就還蠻好理解的了,6中架構(gòu)之間的不同在于basicblock和bottlenek之間的不同以及block的輸入?yún)?shù)的不同。因?yàn)镽esNet一般有4個(gè)stack,每一個(gè)stack里面都是block的堆疊,所以[3, 4, 6, 3]就是每一個(gè)stack里面堆疊block的個(gè)數(shù),故而造就了不同深度的ResNet。

resnet18: ResNet(BasicBlock, [2, 2, 2, 2])
resnet34: ResNet(BasicBlock, [3, 4, 6, 3])
resnet50: ResNet(Bottleneck, [3, 4, 6, 3])
resnet101: ResNet(Bottleneck, [3, 4, 23, 3])
resnet152: ResNet(Bottleneck, [3, 8, 36, 3])
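The names themselves encode the depth, and the arithmetic can be checked directly: each BasicBlock holds 2 convolutions and each Bottleneck holds 3, and adding the stem convolution and the final fully connected layer gives the advertised layer count:

```python
# depth = stem conv (1) + total blocks * convs per block + fc layer (1)
configs = {
    'resnet18': ('basic', [2, 2, 2, 2]),
    'resnet34': ('basic', [3, 4, 6, 3]),
    'resnet50': ('bottleneck', [3, 4, 6, 3]),
    'resnet101': ('bottleneck', [3, 4, 23, 3]),
    'resnet152': ('bottleneck', [3, 8, 36, 3]),
}
for name, (kind, layers) in configs.items():
    convs_per_block = 2 if kind == 'basic' else 3
    depth = 1 + sum(layers) * convs_per_block + 1
    assert depth == int(name[len('resnet'):])  # e.g. 1 + 16*3 + 1 == 50
```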

```python
def resnet18(pretrained=False):
    """Constructs a ResNet-18 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(BasicBlock, [2, 2, 2, 2])
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet18']))
    return model

def resnet34(pretrained=False):
    """Constructs a ResNet-34 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(BasicBlock, [3, 4, 6, 3])
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet34']))
    return model

def resnet50(pretrained=False):
    """Constructs a ResNet-50 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 6, 3])
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet50']))
    return model

def resnet101(pretrained=False):
    """Constructs a ResNet-101 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 4, 23, 3])
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet101']))
    return model

def resnet152(pretrained=False):
    """Constructs a ResNet-152 model.
    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet
    """
    model = ResNet(Bottleneck, [3, 8, 36, 3])
    if pretrained:
        model.load_state_dict(model_zoo.load_url(model_urls['resnet152']))
    return model
```

Finally, the ResNet class builds networks of different depths from the block-count list. A ResNet has 5 stages in total: the first is a 7x7 convolution with stride=2 followed by a pooling layer, after which the feature map is 1/4 the size of the original image. _make_layer() then produces the 4 residual stages, building the network according to the layers list passed in.
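The 1/4 spatial reduction from that first stage can be checked with the standard output-size formula for a convolution or pooling layer, out = floor((in + 2*padding - kernel) / stride) + 1:

```python
def conv_out(size, kernel, stride, padding):
    # standard conv/pool output-size formula
    return (size + 2 * padding - kernel) // stride + 1

size = 224                      # canonical ImageNet input size
size = conv_out(size, 7, 2, 3)  # stem: 7x7 conv, stride 2 -> 112
size = conv_out(size, 3, 2, 1)  # maxpool: 3x3, stride 2 -> 56
assert size == 224 // 4         # feature map is 1/4 of the input
```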

```python
class ResNet(nn.Module):
    def __init__(self, block, layers, num_classes=1000):
        # layers is the block-count list; block selects BasicBlock or Bottleneck
        self.inplanes = 64
        super(ResNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3,
                               bias=False)
        self.bn1 = nn.BatchNorm2d(64)
        self.relu = nn.ReLU(inplace=True)
        self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        self.avgpool = nn.AvgPool2d(7)
        self.fc = nn.Linear(512 * block.expansion, num_classes)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
                m.weight.data.normal_(0, math.sqrt(2. / n))
            elif isinstance(m, nn.BatchNorm2d):
                m.weight.data.fill_(1)
                m.bias.data.zero_()

    def _make_layer(self, block, planes, blocks, stride=1):
        downsample = None
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2d(self.inplanes, planes * block.expansion,
                          kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(planes * block.expansion),
            )

        layers = []
        # The first residual block of the stage goes into the layers list
        layers.append(block(self.inplanes, planes, stride, downsample))
        self.inplanes = planes * block.expansion
        for i in range(1, blocks):
            # The remaining residual blocks of the stage follow,
            # which completes the construction of one stage
            layers.append(block(self.inplanes, planes))

        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)
        x = self.maxpool(x)

        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)

        x = self.avgpool(x)
        x = x.view(x.size(0), -1)  # flatten the output into one row per sample
        x = self.fc(x)

        return x
```
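The downsample condition in _make_layer can be traced by hand (a pure-Python sketch, not part of the original file). Taking resnet50, i.e. ResNet(Bottleneck, [3, 4, 6, 3]), a downsample branch is created for the first block of every stage, either because the stride is 2 or because the channel count changes:

```python
# Trace the condition: stride != 1 or inplanes != planes * expansion
expansion = 4   # Bottleneck.expansion
inplanes = 64   # set in ResNet.__init__
made_downsample = []
for planes, stride in zip([64, 128, 256, 512], [1, 2, 2, 2]):
    made_downsample.append(stride != 1 or inplanes != planes * expansion)
    inplanes = planes * expansion  # remaining blocks in the stage match channels
assert made_downsample == [True, True, True, True]
assert inplanes == 2048  # 512 * 4 features feed the final fc layer
```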

To sketch part of the resnet18 structure: the input first passes through the 7x7 convolution and then enters layer1, which contains two BasicBlocks, each consisting of two 3x3 convolutions. Layers 2, 3 and 4 follow with the same block type, and an average-pooling layer plus a fully connected layer complete the resnet-18 architecture.

其他結(jié)構(gòu)依舊可以調(diào)用上面的函數(shù)進(jìn)行查詢。

Updated 2019-08-16!
