
Published: 2024/7/23

PyTorch Framework Study 20: Model Fine-tuning (Finetune)

  • I. Transfer Learning
  • II. Model Finetune: transfer learning applied to models
  • III. An example: training a binary image classifier from a pretrained ResNet18

I have not actually used model fine-tuning in practice yet, but it will certainly come up later, so this note serves as an introduction: a brief conceptual overview of transfer learning and model fine-tuning, to be studied in more depth when time permits or the need arises.

I. Transfer Learning

Transfer learning is a branch of machine learning (ML) that studies how knowledge learned in a source domain can be applied to a target domain. By carrying knowledge over from the source domain to the target task, we improve the model's performance on that task.

So the main purpose of transfer learning is to use knowledge from elsewhere to improve model performance.

For a detailed treatment, see the survey "A Survey on Transfer Learning".

II. Model Finetune: transfer learning applied to models

Training a model means updating its weights, and those weights can be regarded as knowledge. Visualizing AlexNet's convolution kernels shows that most kernels respond to edges and similar low-level patterns; this is the knowledge AlexNet learned on ImageNet. Weights, then, are the knowledge a neural network has learned for a particular task, and that knowledge can be transferred: carrying trained weights over to a new task is exactly a case of transfer learning. This is model fine-tuning, and it is why Model Finetune counts as Transfer Learning: it treats weights as knowledge and applies that knowledge to a new task.
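The idea that "weights are knowledge" can be made concrete with a minimal sketch. The two-layer toy network below is my own illustration, not AlexNet: transferring knowledge amounts to copying a trained model's `state_dict` into a fresh model of the same architecture.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_net():
    # Toy architecture standing in for a real backbone like AlexNet/ResNet.
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

source = make_net()   # pretend this was trained on a large source dataset
target = make_net()   # fresh model for the new task, randomly initialized

# Transfer the "knowledge": copy every weight tensor from source to target.
target.load_state_dict(source.state_dict())

# The two models now compute identical outputs.
x = torch.randn(4, 8)
assert torch.allclose(source(x), target(x))
```

After `load_state_dict`, the target model behaves exactly like the source; fine-tuning then continues training from these weights instead of from random initialization.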

Why Model Finetune?

Tasks that call for fine-tuning generally share one trait: the new task has too little data to train a large model from scratch. Fine-tuning lets us still train a good model on the new task, and makes training converge faster.

Steps of model fine-tuning

Generally, a neural network can be divided into two parts: a feature extractor and a classifier. The former extracts features; the latter classifies them. The usual practice is to keep both the structure and the parameters of the feature extractor and modify only the classifier to fit the new task. The reason is that the new task's dataset is small, while the pretrained parameters already capture general-purpose features and need little change; retraining them on so little data could even cause overfitting.
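A minimal sketch of this split, using a toy model of my own rather than a real ResNet: the backbone mimics an ImageNet-style model with a 1000-class head, and fine-tuning replaces only that head with one sized for the new 2-class task.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a pretrained network: feature extractor + 1000-class head.
features = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
model = nn.Sequential(features, nn.Linear(64, 1000))

# Fine-tuning for a 2-class task: keep the feature extractor untouched,
# replace only the classifier with a freshly initialized head.
model[1] = nn.Linear(model[1].in_features, 2)

x = torch.randn(4, 32)
assert model(x).shape == (4, 2)
```

With a real torchvision ResNet18 the equivalent one-liner is `resnet18_ft.fc = nn.Linear(resnet18_ft.fc.in_features, 2)`, which is exactly what the full example in section III does.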

So the steps are:

  • obtain the pretrained model's parameters;
  • load the parameters into the model (load_state_dict);
  • modify the output layer to fit the new task.

Training methods for fine-tuning

    Because the structure and parameters of the feature extractor must be preserved, two training methods are commonly used:

  • fix the pretrained parameters: set requires_grad = False, or set lr = 0, so they are never updated;
  • give the feature extractor a very small learning rate, using parameter groups (params_group) to set per-group optimizer settings.
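The two methods above can be sketched on a toy model (the layer names and sizes here are my own illustration, not the ResNet18 used later):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

# Toy model split into the two parts discussed above.
features = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
classifier = nn.Linear(16, 2)
model = nn.Sequential(features, classifier)

# Method 1: freeze the pretrained part so backward never touches it.
for p in features.parameters():
    p.requires_grad = False

loss = model(torch.randn(4, 8)).sum()
loss.backward()

assert features[0].weight.grad is None       # frozen: no gradient at all
assert classifier.weight.grad is not None    # the new head still trains

# Method 2: instead of freezing, give each part its own learning rate
# through optimizer parameter groups.
optimizer = optim.SGD([
    {"params": features.parameters(),   "lr": 0.0},   # or a small lr, e.g. 1e-4
    {"params": classifier.parameters(), "lr": 0.01},
], momentum=0.9)
```

Method 1 also saves computation, since no gradients are computed for the frozen part; method 2 is more flexible, because the backbone can still adapt slowly if its learning rate is small but nonzero.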
III. An example: training a binary image classifier from a pretrained ResNet18

    Data used: https://pan.baidu.com/s/115grxHrq6kMZBg6oC2fatg
    Extraction code: yld7

    # -*- coding: utf-8 -*-
    import os
    import sys
    import numpy as np
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    import torchvision.transforms as transforms
    import torch.optim as optim
    from matplotlib import pyplot as plt

    hello_pytorch_DIR = os.path.abspath(os.path.dirname(__file__)+os.path.sep+".."+os.path.sep+"..")
    sys.path.append(hello_pytorch_DIR)

    from tools.my_dataset import AntsDataset
    from tools.common_tools import set_seed
    import torchvision.models as models
    import torchvision

    BASEDIR = os.path.dirname(os.path.abspath(__file__))
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("use device :{}".format(device))

    set_seed(1)  # set the random seed
    label_name = {"ants": 0, "bees": 1}

    # hyperparameters
    MAX_EPOCH = 25
    BATCH_SIZE = 16
    LR = 0.001
    log_interval = 10
    val_interval = 1
    classes = 2
    start_epoch = -1
    lr_decay_step = 7

    # ============================ step 1/5 data ============================
    data_dir = os.path.abspath(os.path.join(BASEDIR, "..", "..", "data", "hymenoptera_data"))
    if not os.path.exists(data_dir):
        raise Exception("\n{} does not exist; download 07-02-数据-模型finetune.zip into\n{} and unzip it".format(
            data_dir, os.path.dirname(data_dir)))

    train_dir = os.path.join(data_dir, "train")
    valid_dir = os.path.join(data_dir, "val")

    norm_mean = [0.485, 0.456, 0.406]
    norm_std = [0.229, 0.224, 0.225]

    train_transform = transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize(norm_mean, norm_std),
    ])

    valid_transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(norm_mean, norm_std),
    ])

    # build Dataset instances
    train_data = AntsDataset(data_dir=train_dir, transform=train_transform)
    valid_data = AntsDataset(data_dir=valid_dir, transform=valid_transform)

    # build DataLoaders
    train_loader = DataLoader(dataset=train_data, batch_size=BATCH_SIZE, shuffle=True)
    valid_loader = DataLoader(dataset=valid_data, batch_size=BATCH_SIZE)

    # ============================ step 2/5 model ============================
    # 1/3 build the model
    resnet18_ft = models.resnet18()

    # 2/3 load pretrained parameters
    # flag = 0
    flag = 1
    if flag:
        path_pretrained_model = os.path.join(BASEDIR, "..", "..", "data", "finetune_resnet18-5c106cde.pth")
        if not os.path.exists(path_pretrained_model):
            raise Exception("\n{} does not exist; download 07-02-数据-模型finetune.zip\ninto {} and unzip it".format(
                path_pretrained_model, os.path.dirname(path_pretrained_model)))
        state_dict_load = torch.load(path_pretrained_model)
        resnet18_ft.load_state_dict(state_dict_load)

    # method 1: freeze the convolutional layers
    flag_m1 = 0
    # flag_m1 = 1
    if flag_m1:
        for param in resnet18_ft.parameters():
            param.requires_grad = False
        print("conv1.weights[0, 0, ...]:\n {}".format(resnet18_ft.conv1.weight[0, 0, ...]))

    # 3/3 replace the fc layer
    num_ftrs = resnet18_ft.fc.in_features
    resnet18_ft.fc = nn.Linear(num_ftrs, classes)

    resnet18_ft.to(device)

    # ============================ step 3/5 loss function ============================
    criterion = nn.CrossEntropyLoss()

    # ============================ step 4/5 optimizer ============================
    # method 2: small learning rate for the conv layers
    # flag = 0
    flag = 1
    if flag:
        fc_params_id = list(map(id, resnet18_ft.fc.parameters()))  # memory addresses of the fc parameters
        base_params = filter(lambda p: id(p) not in fc_params_id, resnet18_ft.parameters())
        optimizer = optim.SGD([
            {'params': base_params, 'lr': LR*0},  # 0
            {'params': resnet18_ft.fc.parameters(), 'lr': LR}], momentum=0.9)
    else:
        optimizer = optim.SGD(resnet18_ft.parameters(), lr=LR, momentum=0.9)

    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=lr_decay_step, gamma=0.1)  # lr decay schedule

    # ============================ step 5/5 training ============================
    train_curve = list()
    valid_curve = list()

    for epoch in range(start_epoch + 1, MAX_EPOCH):

        loss_mean = 0.
        correct = 0.
        total = 0.

        resnet18_ft.train()
        for i, data in enumerate(train_loader):

            # forward
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)
            outputs = resnet18_ft(inputs)

            # backward
            optimizer.zero_grad()
            loss = criterion(outputs, labels)
            loss.backward()

            # update weights
            optimizer.step()

            # record classification accuracy
            _, predicted = torch.max(outputs.data, 1)
            total += labels.size(0)
            correct += (predicted == labels).squeeze().cpu().sum().numpy()

            # print training info
            loss_mean += loss.item()
            train_curve.append(loss.item())
            if (i+1) % log_interval == 0:
                loss_mean = loss_mean / log_interval
                print("Training:Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                    epoch, MAX_EPOCH, i+1, len(train_loader), loss_mean, correct / total))
                loss_mean = 0.

                # if flag_m1:
                print("epoch:{} conv1.weights[0, 0, ...] :\n {}".format(epoch, resnet18_ft.conv1.weight[0, 0, ...]))

        scheduler.step()  # update the learning rate

        # validate the model
        if (epoch+1) % val_interval == 0:

            correct_val = 0.
            total_val = 0.
            loss_val = 0.
            resnet18_ft.eval()
            with torch.no_grad():
                for j, data in enumerate(valid_loader):
                    inputs, labels = data
                    inputs, labels = inputs.to(device), labels.to(device)
                    outputs = resnet18_ft(inputs)
                    loss = criterion(outputs, labels)

                    _, predicted = torch.max(outputs.data, 1)
                    total_val += labels.size(0)
                    correct_val += (predicted == labels).squeeze().cpu().sum().numpy()

                    loss_val += loss.item()

                loss_val_mean = loss_val/len(valid_loader)
                valid_curve.append(loss_val_mean)
                print("Valid:\t Epoch[{:0>3}/{:0>3}] Iteration[{:0>3}/{:0>3}] Loss: {:.4f} Acc:{:.2%}".format(
                    epoch, MAX_EPOCH, j+1, len(valid_loader), loss_val_mean, correct_val / total_val))
            resnet18_ft.train()

    train_x = range(len(train_curve))
    train_y = train_curve

    train_iters = len(train_loader)
    # valid records one loss per epoch, so convert record points to iterations
    valid_x = np.arange(1, len(valid_curve)+1) * train_iters * val_interval
    valid_y = valid_curve

    plt.plot(train_x, train_y, label='Train')
    plt.plot(valid_x, valid_y, label='Valid')

    plt.legend(loc='upper right')
    plt.ylabel('loss value')
    plt.xlabel('Iteration')
    plt.show()

    The output is (conv1 prints the same tensor after every epoch, so the repeats are abbreviated below):

    use device :cpu
    Training:Epoch[000/025] Iteration[010/016] Loss: 0.6572 Acc:60.62%
    epoch:0 conv1.weights[0, 0, ...] :
    tensor([[-0.0104, -0.0061, -0.0018,  0.0748,  0.0566,  0.0171, -0.0127],
            [ 0.0111,  0.0095, -0.1099, -0.2805, -0.2712, -0.1291,  0.0037],
            [-0.0069,  0.0591,  0.2955,  0.5872,  0.5197,  0.2563,  0.0636],
            [ 0.0305, -0.0670, -0.2984, -0.4387, -0.2709, -0.0006,  0.0576],
            [-0.0275,  0.0160,  0.0726, -0.0541, -0.3328, -0.4206, -0.2578],
            [ 0.0306,  0.0410,  0.0628,  0.2390,  0.4138,  0.3936,  0.1661],
            [-0.0137, -0.0037, -0.0241, -0.0659, -0.1507, -0.0822, -0.0058]],
           grad_fn=<SelectBackward>)
    Valid:   Epoch[000/025] Iteration[010/010] Loss: 0.4565 Acc:84.97%
    Training:Epoch[001/025] Iteration[010/016] Loss: 0.4074 Acc:85.00%
    epoch:1 conv1.weights[0, 0, ...] : (identical tensor to the one above)
    Valid:   Epoch[001/025] Iteration[010/010] Loss: 0.2846 Acc:93.46%
    Training:Epoch[002/025] Iteration[010/016] Loss: 0.3542 Acc:83.12%
    epoch:2 conv1.weights[0, 0, ...] : (identical tensor to the one above)
    Valid:   Epoch[002/025] Iteration[010/010] Loss: 0.2904 Acc:89.54%
    Training:Epoch[003/025] Iteration[010/016] Loss: 0.2266 Acc:93.12%
    epoch:3 conv1.weights[0, 0, ...] : (identical tensor to the one above)
    Valid:   Epoch[003/025] Iteration[010/010] Loss: 0.2252 Acc:94.12%
    Training:Epoch[004/025] Iteration[010/016] Loss: 0.2805 Acc:87.50%
    epoch:4 conv1.weights[0, 0, ...] : (identical tensor to the one above)
    Valid:   Epoch[004/025] Iteration[010/010] Loss: 0.1953 Acc:95.42%
    Training:Epoch[005/025] Iteration[010/016] Loss: 0.2423 Acc:91.88%
    epoch:5 conv1.weights[0, 0, ...] : (identical tensor to the one above)
    Valid:   Epoch[005/025] Iteration[010/010] Loss: 0.2399 Acc:92.16%
    Training:Epoch[006/025] Iteration[010/016] Loss: 0.2455 Acc:90.00%
    epoch:6 conv1.weights[0, 0, ...] : (identical tensor to the one above)

    As the log shows, the model reaches high accuracy from the very first epoch and settles into a good training state quickly, much faster than ordinary training without any borrowed knowledge.

    Here, parameter groups were used to set the feature extractor's learning rate to 0, so its parameters never change, while the fully connected layer keeps the normal learning rate. The log confirms this: the feature extractor's weights stay identical across epochs, and it is the fc layer's weights that change and drive the accuracy gains.
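The claim that lr = 0 makes the update a no-op for the feature extractor, even with momentum, can be checked with a tiny sketch (toy linear layers of my own, not the actual ResNet18 parts):

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)

backbone = nn.Linear(8, 16)   # stand-in for the frozen feature extractor
head = nn.Linear(16, 2)       # stand-in for the trainable fc layer

optimizer = optim.SGD([
    {"params": backbone.parameters(), "lr": 0.0},
    {"params": head.parameters(),     "lr": 0.1},
], momentum=0.9)

backbone_before = backbone.weight.clone()
head_before = head.weight.clone()

loss = head(torch.relu(backbone(torch.randn(4, 8)))).sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The backbone's gradient is computed, but p -= lr * update with lr = 0
# changes nothing; only the head's weights move.
assert torch.equal(backbone.weight, backbone_before)
assert not torch.equal(head.weight, head_before)
```

Note that unlike requires_grad = False, lr = 0 still pays the cost of computing gradients for the backbone, which is one reason freezing is preferred when the feature extractor should never move at all.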

    P.S. This note only covers the basics of transfer learning; a deeper dive can come later if needed.
