Deep Learning with PyTorch (2): Data Parallelism
It's been a long while since my last post. I've been swamped with chores lately, writing documents until I was sick of them, with no time to learn anything new. Now that things have finally quieted down, I'm picking PyTorch back up, having forgotten most of it. PyTorch is broadly similar to TensorFlow, so I won't belabor the basics; anyone with lingering questions can study the official materials:
1.https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
2.https://pytorch-cn.readthedocs.io/zh/latest/package_references/torch-nn/#class-torchnndataparallelmodule-device_idsnone-output_devicenone-dim0source
What I think is worth discussing is the data-parallel part. Pulling the official tutorial code together gives the following:
# -*- coding: utf-8 -*-
"""
Created on Tue Apr 16 14:32:58 2019

@author: kofzh
"""
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Parameters and DataLoaders
input_size = 5
output_size = 2

batch_size = 30
data_size = 100

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

class RandomDataset(Dataset):
    # A dummy dataset: `length` random samples of dimension `size`
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)

class Model(nn.Module):
    # Our model: one linear layer that logs the batch size it actually sees
    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())
        return output

model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)

for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())

The following are the results of running this code with batch_size = 30, on CPU and then on 4 GPUs:
CPU with batch_size = 30 results:
    In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

4-GPU with batch_size = 30 results:
Let's use 4 GPUs!
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
    In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
    In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
    In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
    In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
    In Model: input size torch.Size([1, 5]) output size torch.Size([1, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

(In the raw console output the "In Model" lines interleave with each other, because the four replica threads print concurrently; I've untangled them here for readability.)

Next, the results of the same code with batch_size = 25, on CPU and on 4 GPUs:
CPU with batch_size = 25 results:
    In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
    In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
    In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
    In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])

4-GPU with batch_size = 25 results:
Let's use 4 GPUs!
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
    In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])

My takeaway: in actual training, DataParallel splits each batch across the GPUs essentially evenly; the per-GPU portions add up to batch_size.
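As a quick sanity check on that takeaway, here is a minimal sketch of my own (not DataParallel's actual scatter code, which also handles device placement and nested inputs) showing that the observed per-GPU sizes match a plain chunking of dim 0 into ceil(N/4)-sized pieces, which is exactly what torch.chunk produces:

import torch

# Split batches of 30, 25, and 10 samples into 4 chunks along dim 0,
# mimicking how a batch is scattered across 4 GPUs.
for n in (30, 25, 10):
    batch = torch.randn(n, 5)
    chunks = torch.chunk(batch, chunks=4, dim=0)
    print(n, "->", [c.size(0) for c in chunks])

# Prints:
# 30 -> [8, 8, 8, 6]
# 25 -> [7, 7, 7, 4]
# 10 -> [3, 3, 3, 1]
# matching the per-GPU sizes in the logs above.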
Notes:
1. The CPU result screenshot on the official tutorial page is wrong (most likely the wrong screenshot was pasted in), so I have nothing but contempt for the blogs that copied the official result images verbatim.
2. This post covers multiple GPUs in a single server; please don't confuse it with the multi-machine case.
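On note 2: even within a single server, you don't have to use every GPU. Per the DataParallel signature in the docs linked above, nn.DataParallel(module, device_ids=None, output_device=None, dim=0), you can pin the replicas to a subset. A minimal sketch of my own, assuming at least two visible GPUs:

import torch
import torch.nn as nn

model = nn.Linear(5, 2)
if torch.cuda.device_count() >= 2:
    # Replicate onto GPUs 0 and 1 only; gather outputs on GPU 0.
    model = nn.DataParallel(model, device_ids=[0, 1], output_device=0)
    model.to("cuda:0")  # base parameters must live on device_ids[0]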
If you spot any mistakes, corrections are welcome!
創(chuàng)作挑戰(zhàn)賽新人創(chuàng)作獎勵來咯,堅持創(chuàng)作打卡瓜分現(xiàn)金大獎總結(jié)
以上是生活随笔為你收集整理的深度学习之pytorch(二) 数据并行的全部內(nèi)容,希望文章能夠幫你解決所遇到的問題。
- 上一篇: 冒泡排序 和 归并排序
- 下一篇: 深度学习之keras (一) 初探