PyTorch Learning Notes (4): Logistic Regression

This post, collected and organized by 生活随笔, introduces PyTorch Learning Notes (4): Logistic Regression, which trains a logistic regression classifier on the MNIST dataset with PyTorch. It is shared here for reference.
# Import the required packages
import numpy as np
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Hyperparameters
input_size = 28 * 28
num_classes = 10
num_epochs = 10
batch_size = 100
learning_rate = 0.001

# MNIST dataset
train_dataset = torchvision.datasets.MNIST(root='../../data',
                                           train=True,
                                           transform=transforms.ToTensor(),
                                           download=True)
test_dataset = torchvision.datasets.MNIST(root='../../data',
                                          train=False,
                                          transform=transforms.ToTensor())

# Data loaders
train_loader = torch.utils.data.DataLoader(dataset=train_dataset,
                                           batch_size=batch_size,
                                           shuffle=True)
test_loader = torch.utils.data.DataLoader(dataset=test_dataset,
                                          batch_size=batch_size,
                                          shuffle=False)

# Logistic regression model
model = nn.Linear(input_size, num_classes)

# Loss function and optimizer
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)

# Train the model
total_step = len(train_loader)
for epoch in range(num_epochs):
    for i, (images, labels) in enumerate(train_loader):
        # Flatten the images to (batch_size, input_size)
        images = images.reshape(-1, input_size)

        # Forward pass
        outputs = model(images)
        loss = criterion(outputs, labels)

        # Backward pass and optimization
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if (i + 1) % 100 == 0:
            print('Epoch [{}/{}], Step [{}/{}], Loss:{:.4f}'
                  .format(epoch + 1, num_epochs, i + 1, total_step, loss.item()))

Epoch [1/10], Step [100/600], Loss:2.1877
Epoch [1/10], Step [200/600], Loss:2.0952
Epoch [1/10], Step [300/600], Loss:2.0192
Epoch [1/10], Step [400/600], Loss:1.8990
Epoch [1/10], Step [500/600], Loss:1.8696
Epoch [1/10], Step [600/600], Loss:1.7758
Epoch [2/10], Step [100/600], Loss:1.6857
Epoch [2/10], Step [200/600], Loss:1.6319
Epoch [2/10], Step [300/600], Loss:1.6608
Epoch [2/10], Step [400/600], Loss:1.5398
Epoch [2/10], Step [500/600], Loss:1.4866
Epoch [2/10], Step [600/600], Loss:1.5251
Epoch [3/10], Step [100/600], Loss:1.3724
Epoch [3/10], Step [200/600], Loss:1.3550
Epoch [3/10], Step [300/600], Loss:1.4124
Epoch [3/10], Step [400/600], Loss:1.3413
Epoch [3/10], Step [500/600], Loss:1.1988
Epoch [3/10], Step [600/600], Loss:1.2809
Epoch [4/10], Step [100/600], Loss:1.2419
Epoch [4/10], Step [200/600], Loss:1.1801
Epoch [4/10], Step [300/600], Loss:1.2190
Epoch [4/10], Step [400/600], Loss:1.1891
Epoch [4/10], Step [500/600], Loss:1.1165
Epoch [4/10], Step [600/600], Loss:1.1057
Epoch [5/10], Step [100/600], Loss:0.9912
Epoch [5/10], Step [200/600], Loss:1.1478
Epoch [5/10], Step [300/600], Loss:0.9327
Epoch [5/10], Step [400/600], Loss:0.9662
Epoch [5/10], Step [500/600], Loss:0.8517
Epoch [5/10], Step [600/600], Loss:0.9587
Epoch [6/10], Step [100/600], Loss:0.9835
Epoch [6/10], Step [200/600], Loss:0.9946
Epoch [6/10], Step [300/600], Loss:0.8951
Epoch [6/10], Step [400/600], Loss:0.9013
Epoch [6/10], Step [500/600], Loss:0.9931
Epoch [6/10], Step [600/600], Loss:0.8686
Epoch [7/10], Step [100/600], Loss:0.9099
Epoch [7/10], Step [200/600], Loss:0.8394
Epoch [7/10], Step [300/600], Loss:0.9348
Epoch [7/10], Step [400/600], Loss:0.7706
Epoch [7/10], Step [500/600], Loss:1.0090
Epoch [7/10], Step [600/600], Loss:0.8198
Epoch [8/10], Step [100/600], Loss:0.8123
Epoch [8/10], Step [200/600], Loss:0.8300
Epoch [8/10], Step [300/600], Loss:0.8192
Epoch [8/10], Step [400/600], Loss:0.7725
Epoch [8/10], Step [500/600], Loss:0.8281
Epoch [8/10], Step [600/600], Loss:0.8399
Epoch [9/10], Step [100/600], Loss:0.8856
Epoch [9/10], Step [200/600], Loss:0.7889
Epoch [9/10], Step [300/600], Loss:0.7205
Epoch [9/10], Step [400/600], Loss:0.7489
Epoch [9/10], Step [500/600], Loss:0.8840
Epoch [9/10], Step [600/600], Loss:0.6865
Epoch [10/10], Step [100/600], Loss:0.7590
Epoch [10/10], Step [200/600], Loss:0.7337
Epoch [10/10], Step [300/600], Loss:0.7751
Epoch [10/10], Step [400/600], Loss:0.6656
Epoch [10/10], Step [500/600], Loss:0.6606
Epoch [10/10], Step [600/600], Loss:0.7351
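A note on why this counts as logistic regression: the model is a bare nn.Linear layer that produces raw class scores (logits); the softmax that turns those scores into class probabilities is folded into nn.CrossEntropyLoss, which combines LogSoftmax and NLLLoss. Below is a minimal sketch of that equivalence; the tensors are made up purely for illustration and are not part of the original example.

import torch
import torch.nn.functional as F

logits = torch.randn(4, 10)           # pretend scores for a batch of 4 samples, 10 classes
targets = torch.tensor([3, 0, 7, 1])  # pretend labels

# Built-in cross-entropy on raw logits
loss_builtin = F.cross_entropy(logits, targets)
# The same thing written out as log-softmax followed by negative log-likelihood
loss_manual = F.nll_loss(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(loss_builtin, loss_manual))  # expected to print True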
# Test the model
# No gradients are needed during the test phase
with torch.no_grad():
    correct = 0
    total = 0
    for images, labels in test_loader:
        images = images.reshape(-1, input_size)
        outputs = model(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum()

    print('Accuracy of the model on the 10000 test images:{} %'.format(100 * correct / total))

Accuracy of the model on the 10000 test images:85.55999755859375 %

# Save the model checkpoint
torch.save(model.state_dict(), 'model.ckpt')
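To reuse the trained weights later, the saved state dict can be loaded back into a linear layer of the same shape. The following is only a sketch under the assumption that the 'model.ckpt' file written by the save call above is available; the input tensor is a random stand-in for a real flattened MNIST image, not data from the original post.

import torch
import torch.nn as nn

input_size = 28 * 28
num_classes = 10

# Rebuild a linear model with the same shape and restore the saved weights
restored = nn.Linear(input_size, num_classes)
restored.load_state_dict(torch.load('model.ckpt'))
restored.eval()

with torch.no_grad():
    sample = torch.rand(1, input_size)                  # stand-in for one flattened 28x28 image
    prediction = restored(sample).argmax(dim=1).item()  # index of the highest-scoring class
    print(prediction)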

Summary

The above is the full content of PyTorch Learning Notes (4): Logistic Regression as collected and organized by 生活随笔. I hope it helps you solve the problems you encounter.
