
Deep Learning with PyTorch (2): Data Parallelism


It has been a while since my last post. I've been tied up with miscellaneous work and writing documentation, with no time to learn anything new. Now that things have quieted down, I'm getting back to the PyTorch I had mostly forgotten. PyTorch is broadly similar to TensorFlow, so I won't go over the basics here; if you still have questions, the official tutorials are a good starting point:

1.https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html

2.https://pytorch-cn.readthedocs.io/zh/latest/package_references/torch-nn/#class-torchnndataparallelmodule-device_idsnone-output_devicenone-dim0source


What I think is worth a closer look is the data-parallel part.

The code below is condensed from the official tutorial:

# -*- coding: utf-8 -*-
"""
Created on Tue Apr 16 14:32:58 2019

@author: kofzh
"""

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Parameters and DataLoaders
input_size = 5
output_size = 2

batch_size = 30
data_size = 100

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)

class Model(nn.Module):
    # Our model
    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())
        return output

model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)

model.to(device)
for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())

對于上述代碼分別是 batchsize= 30在CPU和4路GPU環(huán)境下運行得出的結(jié)果:

CPU, batch_size = 30:

        In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
        In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
        In Model: input size torch.Size([30, 5]) output size torch.Size([30, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
        In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])

4 GPUs, batch_size = 30:

Let's use 4 GPUs!
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([8, 5]) output size torch.Size([8, 2])
        In Model: input size torch.Size([6, 5]) output size torch.Size([6, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
        In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
        In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
        In Model: input size torch.Size([3, 5]) output size torch.Size([3, 2])
        In Model: input size torch.Size([1, 5]) output size torch.Size([1, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
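The per-GPU sizes in these logs follow torch.chunk-style splitting along dim 0: the first GPUs each get ceil(N / 4) samples and the last one gets whatever remains. Below is a quick sketch that reproduces the observed sizes; it only mimics the scatter behaviour with torch.chunk (whether DataParallel splits exactly this way internally is my assumption, but the numbers match the logs):

import torch

# Mimic how a batch of N samples is split across 4 devices along dim 0.
# torch.chunk gives the first chunks ceil(N / 4) rows and the last chunk the rest.
for n in (30, 25, 10):
    batch = torch.randn(n, 5)
    sizes = [c.size(0) for c in batch.chunk(4, dim=0)]
    print(n, "->", sizes)

# Prints:
# 30 -> [8, 8, 8, 6]
# 25 -> [7, 7, 7, 4]
# 10 -> [3, 3, 3, 1]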

對于上述代碼分別是 batchsize= 25在CPU和4路GPU環(huán)境下運行得出的結(jié)果:

CPU, batch_size = 25:

        In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
        In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
        In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
        In Model: input size torch.Size([25, 5]) output size torch.Size([25, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])

4 GPUs, batch_size = 25:

Let's use 4 GPUs!
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([25, 5]) output_size torch.Size([25, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([7, 5]) output size torch.Size([7, 2])
        In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])

My takeaway: during training, each batch is divided across the GPUs roughly evenly, and the per-GPU chunk sizes add up to batch_size.
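For completeness, the tutorial code above only runs forward passes. Below is a minimal sketch of what a training step around the same DataParallel model could look like, reusing model, rand_loader, device and output_size from the code above; the MSE loss, SGD optimizer and all-zero target are my own illustrative choices to make the loop runnable, not part of the official tutorial:

import torch
import torch.nn as nn

# Illustrative training loop around the DataParallel model defined earlier.
# The loss, optimizer and dummy target are assumptions for demonstration only.
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for data in rand_loader:
    input = data.to(device)                  # full batch on the primary device
    target = torch.zeros(input.size(0), output_size, device=device)  # dummy target

    optimizer.zero_grad()
    output = model(input)                    # each GPU runs forward on its chunk
    loss = criterion(output, target)         # outputs are gathered back onto cuda:0
    loss.backward()                          # gradients are reduced onto the wrapped model
    optimizer.step()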

Notes:

1. The CPU result figure on the official tutorial page is wrong, most likely the wrong screenshot was pasted; I have little respect for the blogs that copied that figure verbatim without checking it.

2. This article deals with multiple GPUs on a single server; please don't confuse this with the multi-machine case.

If you spot any mistakes, please point them out!

