PyTorch Neural Networks: the nn Module


Contents

    • 1. The nn module
    • 2. torch.optim optimizers
    • 3. Custom nn modules
    • 4. Weight sharing

Reference: http://pytorch123.com/

1. The nn module

  • torch.nn.Sequential builds a model layer by layer, much like Keras.

import torch

N, D_in, Hidden_size, D_out = 64, 1000, 100, 10

x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, Hidden_size),
    torch.nn.ReLU(),
    torch.nn.Linear(Hidden_size, D_out)
)

# loss function
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
loss_list = []

for t in range(500):
    y_pred = model(x)              # forward pass
    loss = loss_fn(y_pred, y)      # compute the loss
    loss_list.append(loss.item())
    print(t, loss.item())

    model.zero_grad()              # zero the gradients
    loss.backward()                # backward pass: compute gradients

    with torch.no_grad():          # update parameters outside the autograd graph
        for param in model.parameters():
            param -= learning_rate * param.grad  # gradient descent step

# plot the loss curve
import pandas as pd
loss_curve = pd.DataFrame(loss_list, columns=['loss'])
loss_curve.plot()

2. torch.optim optimizers

  • torch.optim.Adam builds the optimizer
  • optimizer.zero_grad() zeroes the gradients
  • optimizer.step() updates the parameters
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

loss_list = []
for t in range(500):
    y_pred = model(x)             # forward pass
    loss = loss_fn(y_pred, y)     # compute the loss
    loss_list.append(loss.item())
    print(t, loss.item())

    optimizer.zero_grad()         # zero the gradients
    loss.backward()               # backward pass: compute gradients
    optimizer.step()              # update the parameters
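The same three calls (zero_grad, backward, step) work unchanged with any other optimizer in torch.optim. As a small illustrative sketch (not part of the original post, learning rate chosen arbitrarily), swapping Adam for SGD with momentum only changes the constructor line:

# hypothetical alternative optimizer; the training loop stays identical
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)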

3. Custom nn modules

  • Subclass nn.Module and define the forward pass in a forward method.
import torch

class myModel(torch.nn.Module):
    def __init__(self, D_in, Hidden_size, D_out):
        super(myModel, self).__init__()
        self.fc1 = torch.nn.Linear(D_in, Hidden_size)
        self.fc2 = torch.nn.Linear(Hidden_size, D_out)

    def forward(self, x):
        x = self.fc1(x).clamp(min=0)  # clamp clips values to [min, max]; with min=0 it acts as ReLU
        x = self.fc2(x)
        return x

N, D_in, Hidden_size, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = myModel(D_in, Hidden_size, D_out)  # custom model

loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

loss_val = []
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    loss_val.append(loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

import pandas as pd
loss_val = pd.DataFrame(loss_val, columns=['loss'])
loss_val.plot()
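As a quick sanity check (an addition, not in the original post), printing the module lists the layers registered in __init__; the commented output below is what a recent PyTorch version would produce:

print(model)
# myModel(
#   (fc1): Linear(in_features=1000, out_features=100, bias=True)
#   (fc2): Linear(in_features=100, out_features=10, bias=True)
# )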

4. Weight sharing

  • 建立一個有3種FC層的玩具模型,中間 shareFC層會被 for 循環重復 0-3 次(隨機),這幾層(次數隨機)的參數是共享的
import random
import torch

class shareParamsModel(torch.nn.Module):
    def __init__(self, D_in, Hidden_size, D_out):
        super(shareParamsModel, self).__init__()
        self.inputFC = torch.nn.Linear(D_in, Hidden_size)
        self.shareFC = torch.nn.Linear(Hidden_size, Hidden_size)
        self.outputFC = torch.nn.Linear(Hidden_size, D_out)
        self.sharelayers = 0  # records how many shared layers were used in the last forward pass

    def forward(self, x):
        x = self.inputFC(x).clamp(min=0)
        self.sharelayers = 0
        for _ in range(random.randint(0, 3)):
            x = self.shareFC(x).clamp(min=0)  # same layer, same weights, applied repeatedly
            self.sharelayers += 1
        x = self.outputFC(x)
        return x

N, D_in, Hidden_size, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = shareParamsModel(D_in, Hidden_size, D_out)

loss_fn = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)

loss_val = []
for t in range(500):
    y_pred = model(x)
    print('share layers: ', model.sharelayers)
    loss = loss_fn(y_pred, y)
    loss_val.append(loss.item())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for p in model.parameters():
    print(p.size())

import pandas as pd
loss_val = pd.DataFrame(loss_val, columns=['loss'])
loss_val.plot()

Output:

share layers: 1
share layers: 0
share layers: 2
share layers: 1
share layers: 2
share layers: 1
share layers: 0
share layers: 1
share layers: 0
share layers: 0
share layers: 3
share layers: 3
... (rest omitted)

The parameter shapes, identical across repeated runs, are listed below. Because shareFC is a single Linear layer that is merely reused, its weights are registered only once, so the parameter list does not depend on how many times the layer happened to be applied:

torch.Size([100, 1000])
torch.Size([100])
torch.Size([100, 100])
torch.Size([100])
torch.Size([10, 100])
torch.Size([10])
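To confirm that the shared layer contributes only one set of weights, the total parameter count can be computed from the shapes above (a small addition, not in the original post):

# 100*1000 + 100 + 100*100 + 100 + 10*100 + 10 = 111210
total_params = sum(p.numel() for p in model.parameters())
print(total_params)  # 111210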
