Training with PyTorch¶
Created On: Nov 30, 2021 | Last Updated: May 31, 2023 | Last Verified: Nov 05, 2024
Follow along with the video below or on youtube.
Introduction¶
In past videos, we've discussed and demonstrated:
Building models with the neural network layers and functions of the torch.nn module
The mechanics of automated gradient computation, which is central to gradient-based model training
Using TensorBoard to visualize training progress and other activities
In this video, we'll be adding some new tools to your inventory:
We'll get familiar with the Dataset and DataLoader abstractions, and how they ease the process of feeding data to your model during a training loop
We'll discuss specific loss functions and when to use them
We'll look at PyTorch optimizers, which implement algorithms to adjust model weights based on the outcome of a loss function
Finally, we'll pull all of these together and see a full PyTorch training loop in action.
Dataset and DataLoader¶
The Dataset and DataLoader classes encapsulate the process of pulling your data from storage and exposing it to your training loop in batches.
The Dataset is responsible for accessing and processing single instances of data. The DataLoader pulls instances of data from the Dataset (either automatically or with a sampler that you define), collects them in batches, and returns them for consumption by your training loop. The DataLoader works with all kinds of datasets, regardless of the type of data they contain. A minimal custom Dataset is sketched below.
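To make that division of labor concrete, here is a minimal sketch of a map-style Dataset (a hypothetical toy example, not part of this tutorial): a subclass only needs to implement __len__ and __getitem__, and DataLoader takes care of sampling, batching, and shuffling.
import torch
from torch.utils.data import Dataset, DataLoader

class SquaresDataset(Dataset):
    """Toy map-style dataset returning (x, x**2) tensor pairs."""
    def __len__(self):
        return 100                        # number of instances
    def __getitem__(self, idx):
        x = torch.tensor([float(idx)])    # access/process a single instance
        return x, x ** 2

loader = DataLoader(SquaresDataset(), batch_size=10, shuffle=True)
xs, ys = next(iter(loader))               # each has shape (10, 1)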
For this tutorial, we'll be using the Fashion-MNIST dataset provided by TorchVision. We use torchvision.transforms.Normalize() to zero-center and normalize the distribution of the image tile content, and download both training and validation data splits.
import torch
import torchvision
import torchvision.transforms as transforms
# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter
from datetime import datetime
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Create datasets for training & validation, download if necessary
training_set = torchvision.datasets.FashionMNIST('./data', train=True, transform=transform, download=True)
validation_set = torchvision.datasets.FashionMNIST('./data', train=False, transform=transform, download=True)
# Create data loaders for our datasets; shuffle for training, not for validation
training_loader = torch.utils.data.DataLoader(training_set, batch_size=4, shuffle=True)
validation_loader = torch.utils.data.DataLoader(validation_set, batch_size=4, shuffle=False)
# Class labels
classes = ('T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot')
# Report split sizes
print('Training set has {} instances'.format(len(training_set)))
print('Validation set has {} instances'.format(len(validation_set)))
Training set has 60000 instances
Validation set has 10000 instances
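It can also be worth spot-checking a single instance numerically (a quick check using the objects defined above): Normalize((0.5,), (0.5,)) maps pixel values from [0, 1] to [-1, 1] via (x - 0.5) / 0.5.
img, label = training_set[0]
print(img.shape)                            # torch.Size([1, 28, 28])
print(img.min().item(), img.max().item())   # values lie in [-1, 1] after Normalize
print(classes[label])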
As always, let's visualize the data as a sanity check:
import matplotlib.pyplot as plt
import numpy as np
# Helper function for inline image display
def matplotlib_imshow(img, one_channel=False):
if one_channel:
img = img.mean(dim=0)
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
if one_channel:
plt.imshow(npimg, cmap="Greys")
else:
plt.imshow(np.transpose(npimg, (1, 2, 0)))
dataiter = iter(training_loader)
images, labels = next(dataiter)
# Create a grid from the images and show them
img_grid = torchvision.utils.make_grid(images)
matplotlib_imshow(img_grid, one_channel=True)
print(' '.join(classes[labels[j]] for j in range(4)))

Sandal Sneaker Coat Sneaker
The Model¶
The model we'll use in this example is a variant of LeNet-5 - it should be familiar if you've watched the previous videos in this series.
import torch.nn as nn
import torch.nn.functional as F
# PyTorch models inherit from torch.nn.Module
class GarmentClassifier(nn.Module):
def __init__(self):
super(GarmentClassifier, self).__init__()
self.conv1 = nn.Conv2d(1, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 4 * 4, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 4 * 4)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
model = GarmentClassifier()
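As a quick check on the layer arithmetic (not part of the original tutorial): Fashion-MNIST images are 1x28x28, so conv1 (5x5 kernel) yields 24x24, pooling halves that to 12x12, conv2 yields 8x8, and pooling again gives 4x4 - hence the 16 * 4 * 4 flatten.
# Sanity check: one dummy grayscale image in, ten class scores out
dummy_input = torch.rand(1, 1, 28, 28)
print(model(dummy_input).shape)   # expected: torch.Size([1, 10])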
Loss Function¶
For this example, we'll be using a cross-entropy loss. For demonstration purposes, we'll create batches of dummy output and label values, run them through the loss function, and examine the result.
loss_fn = torch.nn.CrossEntropyLoss()
# NB: Loss functions expect data in batches, so we're creating batches of 4
# Represents the model's confidence in each of the 10 classes for a given input
dummy_outputs = torch.rand(4, 10)
# Represents the correct class among the 10 being tested
dummy_labels = torch.tensor([1, 5, 3, 7])
print(dummy_outputs)
print(dummy_labels)
loss = loss_fn(dummy_outputs, dummy_labels)
print('Total loss for this batch: {}'.format(loss.item()))
tensor([[0.7026, 0.1489, 0.0065, 0.6841, 0.4166, 0.3980, 0.9849, 0.6701, 0.4601,
0.8599],
[0.7461, 0.3920, 0.9978, 0.0354, 0.9843, 0.0312, 0.5989, 0.2888, 0.8170,
0.4150],
[0.8408, 0.5368, 0.0059, 0.8931, 0.3942, 0.7349, 0.5500, 0.0074, 0.0554,
0.1537],
[0.7282, 0.8755, 0.3649, 0.4566, 0.8796, 0.2390, 0.9865, 0.7549, 0.9105,
0.5427]])
tensor([1, 5, 3, 7])
Total loss for this batch: 2.428950071334839
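As a cross-check, nn.CrossEntropyLoss on raw scores is equivalent to applying log_softmax and then taking the negative log-likelihood of the correct class (a sketch using the dummy tensors above):
import torch.nn.functional as F

# The same computation as CrossEntropyLoss, in two explicit steps
log_probs = F.log_softmax(dummy_outputs, dim=1)
manual_loss = F.nll_loss(log_probs, dummy_labels)
print(manual_loss.item())   # matches loss.item() above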
Optimizer¶
For this example, we'll be using simple stochastic gradient descent with momentum.
It can be instructive to try some variations on this optimization scheme:
Learning rate determines the size of the steps the optimizer takes. What does a different learning rate do to your training results, in terms of accuracy and convergence time?
Momentum nudges the optimizer in the direction of the strongest gradient over multiple steps. What does changing this value do to your results?
Try some different optimization algorithms, such as averaged SGD, Adagrad, or Adam. How do your results differ? (A sketch of these alternatives follows the cell below.)
# Optimizers specified in the torch.optim package
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
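If you want to experiment as suggested above, the alternatives are constructed the same way (a sketch; the hyperparameter values here are illustrative, not tuned):
# Illustrative alternatives to try in place of the SGD optimizer above
asgd_opt = torch.optim.ASGD(model.parameters(), lr=0.001)       # averaged SGD
adagrad_opt = torch.optim.Adagrad(model.parameters(), lr=0.01)  # per-parameter adaptive rates
adam_opt = torch.optim.Adam(model.parameters(), lr=0.001)       # adaptive moment estimation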
The Training Loop¶
Below, we have a function that performs one training epoch. It enumerates data from the DataLoader, and on each pass of the loop does the following:
Gets a batch of training data from the DataLoader
Zeros the optimizer's gradients
Performs an inference - that is, gets predictions from the model for an input batch
Computes the loss for that set of predictions vs. the labels on the dataset
Computes the backward gradients over the learning weights
Tells the optimizer to perform one learning step - that is, adjust the model's learning weights based on the observed gradients for this batch, according to the optimization algorithm we chose
It reports on the loss for every 1000 batches.
Finally, it reports the average per-batch loss for the last 1000 batches, for comparison with a validation run
def train_one_epoch(epoch_index, tb_writer):
running_loss = 0.
last_loss = 0.
# Here, we use enumerate(training_loader) instead of
# iter(training_loader) so that we can track the batch
# index and do some intra-epoch reporting
for i, data in enumerate(training_loader):
# Every data instance is an input + label pair
inputs, labels = data
# Zero your gradients for every batch!
optimizer.zero_grad()
# Make predictions for this batch
outputs = model(inputs)
# Compute the loss and its gradients
loss = loss_fn(outputs, labels)
loss.backward()
# Adjust learning weights
optimizer.step()
# Gather data and report
running_loss += loss.item()
if i % 1000 == 999:
last_loss = running_loss / 1000 # loss per batch
print(' batch {} loss: {}'.format(i + 1, last_loss))
tb_x = epoch_index * len(training_loader) + i + 1
tb_writer.add_scalar('Loss/train', last_loss, tb_x)
running_loss = 0.
return last_loss
Per-Epoch Activity¶
There are a couple of things we'll want to do once per epoch:
Perform validation by checking our relative loss on a set of data that was not used for training, and report this
Save a copy of the model
Here, we'll do our reporting in TensorBoard. This will require going to the command line to start TensorBoard, and opening it in another browser tab.
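For reference, the standard launch command is the following (run it from the directory containing the runs/ folder written below, then open the URL it prints):
tensorboard --logdir=runs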
# Initializing in a separate cell so we can easily add more epochs to the same run
timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
writer = SummaryWriter('runs/fashion_trainer_{}'.format(timestamp))
epoch_number = 0
EPOCHS = 5
best_vloss = 1_000_000.
for epoch in range(EPOCHS):
print('EPOCH {}:'.format(epoch_number + 1))
# Make sure gradient tracking is on, and do a pass over the data
model.train(True)
avg_loss = train_one_epoch(epoch_number, writer)
running_vloss = 0.0
# Set the model to evaluation mode, disabling dropout and using population
# statistics for batch normalization.
model.eval()
# Disable gradient computation and reduce memory consumption.
with torch.no_grad():
for i, vdata in enumerate(validation_loader):
vinputs, vlabels = vdata
voutputs = model(vinputs)
vloss = loss_fn(voutputs, vlabels)
running_vloss += vloss
avg_vloss = running_vloss / (i + 1)
print('LOSS train {} valid {}'.format(avg_loss, avg_vloss))
# Log the running loss averaged per batch
# for both training and validation
writer.add_scalars('Training vs. Validation Loss',
{ 'Training' : avg_loss, 'Validation' : avg_vloss },
epoch_number + 1)
writer.flush()
# Track best performance, and save the model's state
if avg_vloss < best_vloss:
best_vloss = avg_vloss
model_path = 'model_{}_{}'.format(timestamp, epoch_number)
torch.save(model.state_dict(), model_path)
epoch_number += 1
EPOCH 1:
batch 1000 loss: 1.6334228541590274
batch 2000 loss: 0.8324381597135216
batch 3000 loss: 0.7350949151031673
batch 4000 loss: 0.6221513676682953
batch 5000 loss: 0.6008665340302978
batch 6000 loss: 0.5533551393696107
batch 7000 loss: 0.5268192595622968
batch 8000 loss: 0.4953766325986944
batch 9000 loss: 0.4763272075761342
batch 10000 loss: 0.48026260716759134
batch 11000 loss: 0.4555706014999887
batch 12000 loss: 0.43150419856602096
batch 13000 loss: 0.41889463035896185
batch 14000 loss: 0.4101380754457787
batch 15000 loss: 0.4188491042831447
LOSS train 0.4188491042831447 valid 0.42083388566970825
EPOCH 2:
batch 1000 loss: 0.39033183104451746
batch 2000 loss: 0.35730057470843896
batch 3000 loss: 0.3797398313785088
batch 4000 loss: 0.3595128281345387
batch 5000 loss: 0.3674602470536483
batch 6000 loss: 0.3695404906652402
batch 7000 loss: 0.38634192156628705
batch 8000 loss: 0.37888678515458013
batch 9000 loss: 0.32936658181797246
batch 10000 loss: 0.3460305611458316
batch 11000 loss: 0.355949883276422
batch 12000 loss: 0.34613123371596155
batch 13000 loss: 0.3435088261961791
batch 14000 loss: 0.35190882972519466
batch 15000 loss: 0.34078337761512373
LOSS train 0.34078337761512373 valid 0.3449384272098541
EPOCH 3:
batch 1000 loss: 0.3336456001721235
batch 2000 loss: 0.2948776570415939
batch 3000 loss: 0.30873254264354183
batch 4000 loss: 0.3269525112561532
batch 5000 loss: 0.3081500146031831
batch 6000 loss: 0.33906219027831686
batch 7000 loss: 0.3114977335120493
batch 8000 loss: 0.3028961390093173
batch 9000 loss: 0.31883212575598735
batch 10000 loss: 0.3121348040100274
batch 11000 loss: 0.3204089922408457
batch 12000 loss: 0.3172754702415841
batch 13000 loss: 0.3022056705406212
batch 14000 loss: 0.29925711060611504
batch 15000 loss: 0.3158802612772852
LOSS train 0.3158802612772852 valid 0.32655972242355347
EPOCH 4:
batch 1000 loss: 0.2793223039015138
batch 2000 loss: 0.2759745200898469
batch 3000 loss: 0.2885438525550344
batch 4000 loss: 0.29715126178535867
batch 5000 loss: 0.3092308461628054
batch 6000 loss: 0.29819886386692085
batch 7000 loss: 0.28212033420058286
batch 8000 loss: 0.2652145917697999
batch 9000 loss: 0.30505836525483027
batch 10000 loss: 0.28172129570529797
batch 11000 loss: 0.2760911153540328
batch 12000 loss: 0.29349113235381813
batch 13000 loss: 0.28226990548134745
batch 14000 loss: 0.2974613601177407
batch 15000 loss: 0.3016561955644138
LOSS train 0.3016561955644138 valid 0.3930961787700653
EPOCH 5:
batch 1000 loss: 0.2611404411364929
batch 2000 loss: 0.25894880425418887
batch 3000 loss: 0.2585991551137176
batch 4000 loss: 0.2808971864393097
batch 5000 loss: 0.26857244527151486
batch 6000 loss: 0.2778763904040534
batch 7000 loss: 0.2556428771363862
batch 8000 loss: 0.2892738865161955
batch 9000 loss: 0.2898595165217885
batch 10000 loss: 0.24955335284502145
batch 11000 loss: 0.27326060194405
batch 12000 loss: 0.2833696024138153
batch 13000 loss: 0.2705353221144751
batch 14000 loss: 0.24937306600230658
batch 15000 loss: 0.27901125454565046
LOSS train 0.27901125454565046 valid 0.3100835084915161
To load a saved version of the model:
saved_model = GarmentClassifier()
# PATH is the file path of a checkpoint saved above, e.g. 'model_{}_{}'.format(timestamp, epoch_number)
saved_model.load_state_dict(torch.load(PATH))
Once you've loaded the model, it's ready for whatever you need it for - more training, inference, or analysis.
Note that if your model has constructor parameters that affect model structure, you'll need to provide them and configure the model identically to the state in which it was saved.
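For instance, a model whose width is a constructor argument must be rebuilt with the same argument before load_state_dict will accept the checkpoint (a hypothetical sketch; GarmentClassifier above has no such arguments):
class ConfigurableClassifier(nn.Module):
    def __init__(self, hidden_units):
        super().__init__()
        # hidden_units changes the layer shapes stored in the state dict
        self.fc = nn.Linear(28 * 28, hidden_units)
        self.out = nn.Linear(hidden_units, 10)
    def forward(self, x):
        return self.out(F.relu(self.fc(x.flatten(1))))

trained = ConfigurableClassifier(hidden_units=128)
torch.save(trained.state_dict(), 'configurable.pt')

# Reloading requires the same hidden_units; a mismatch raises a size-mismatch error
restored = ConfigurableClassifier(hidden_units=128)
restored.load_state_dict(torch.load('configurable.pt'))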
Other Resources¶
Docs on the data utilities, including Dataset and DataLoader, at pytorch.org
A note on the use of pinned memory for GPU training
Documentation on the datasets available in TorchVision, TorchText, and TorchAudio
Documentation on the loss functions available in PyTorch
Documentation on the torch.optim package, which includes optimizers and related tools, such as learning rate scheduling
A detailed tutorial on saving and loading models
The Tutorials section of pytorch.org contains tutorials on a broad variety of training tasks, including classification in different domains, generative adversarial networks, reinforcement learning, and more
Total running time of the script: (3 minutes 0.715 seconds)