
TorchVision Object Detection Finetuning Tutorial

In this tutorial, we will be finetuning a pre-trained Mask R-CNN model on the Penn-Fudan Database for Pedestrian Detection and Segmentation. It contains 170 images with 345 instances of pedestrians, and we will use it to illustrate how to use the new features in torchvision in order to train an object detection and instance segmentation model on a custom dataset.

Note

This tutorial works only with torchvision version >=0.16 or nightly. If you are using torchvision<=0.15, please follow this tutorial instead.

Defining the Dataset

The reference scripts for training object detection, instance segmentation and person keypoint detection allow for easily adding support for new custom datasets. The dataset should inherit from the standard torch.utils.data.Dataset class and implement __len__ and __getitem__.

The only requirement we impose is that the dataset's __getitem__ should return a tuple of:

  • image: a torchvision.tv_tensors.Image of shape [3, H, W], a pure tensor, or a PIL Image of size (H, W)

  • target: a dict containing the following fields

    • boxes, a torchvision.tv_tensors.BoundingBoxes of shape [N, 4]: the coordinates of the N bounding boxes in [x0, y0, x1, y1] format, ranging from 0 to W and 0 to H

    • labels, an integer torch.Tensor of shape [N]: the label for each bounding box. 0 always represents the background class.

    • image_id, an integer: an image identifier. It should be unique across all images in the dataset, and is used during evaluation.

    • area, a float torch.Tensor of shape [N]: the area of each bounding box. This is used during evaluation with the COCO metric, to separate the metric scores between small, medium and large boxes.

    • iscrowd, a uint8 torch.Tensor of shape [N]: instances with iscrowd=True will be ignored during evaluation.

    • (optional) masks, a torchvision.tv_tensors.Mask of shape [N, H, W]: the segmentation masks for each one of the objects.

If your dataset is compliant with the above requirements, then it will work with both the training and evaluation code from the reference scripts. The evaluation code uses scripts from pycocotools, which can be installed with pip install pycocotools.

Note

For Windows, install pycocotools from gautamchitnis with the following command:

pip install git+https://github.com/gautamchitnis/cocoapi.git@cocodataset-master#subdirectory=PythonAPI

A note on labels: the model treats class 0 as background. If your dataset does not contain a background class, you should not have 0 in your labels. For example, assuming you have just two classes, cat and dog, you can define 1 (not 0) to represent cats and 2 to represent dogs. So, for instance, if one of the images contains both classes, your labels tensor should look like [1, 2]; a minimal sketch of building such a tensor is shown below.
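For illustration only (the cat/dog class ids are hypothetical and not part of this dataset):

import torch

# Hypothetical two-class mapping: 1 = cat, 2 = dog; 0 is reserved for background.
# For an image containing one cat and one dog:
labels = torch.tensor([1, 2], dtype=torch.int64)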

Additionally, if you want to use aspect ratio grouping during training (so that each batch only contains images with similar aspect ratios), then it is recommended to also implement a get_height_and_width method, which returns the height and the width of the image. If this method is not provided, we query all elements of the dataset via __getitem__, which loads the image in memory and is slower than if a custom method is provided; a sketch of such a method follows.
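A minimal sketch, meant to be added as a method of the PennFudanDataset class written below (it assumes the same self.root and self.imgs attributes, and that os is imported). PIL opens images lazily, so only the file header is read and the pixel data is never decoded:

from PIL import Image

def get_height_and_width(self, idx):
    # Read only the image header to get its dimensions;
    # PIL does not decode the pixel data here.
    img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
    with Image.open(img_path) as img:
        width, height = img.size
    return height, width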

Writing a custom dataset for PennFudan

Let's write a dataset for the PennFudan dataset. First, let's download the dataset and extract the zip file:

wget https://www.cis.upenn.edu/~jshi/ped_html/PennFudanPed.zip -P data
cd data && unzip PennFudanPed.zip

We have the following folder structure:

PennFudanPed/
  PedMasks/
    FudanPed00001_mask.png
    FudanPed00002_mask.png
    FudanPed00003_mask.png
    FudanPed00004_mask.png
    ...
  PNGImages/
    FudanPed00001.png
    FudanPed00002.png
    FudanPed00003.png
    FudanPed00004.png

Here is one example of an image and its segmentation mask:

import matplotlib.pyplot as plt
from torchvision.io import read_image


image = read_image("data/PennFudanPed/PNGImages/FudanPed00046.png")
mask = read_image("data/PennFudanPed/PedMasks/FudanPed00046_mask.png")

plt.figure(figsize=(16, 8))
plt.subplot(121)
plt.title("Image")
plt.imshow(image.permute(1, 2, 0))
plt.subplot(122)
plt.title("Mask")
plt.imshow(mask.permute(1, 2, 0))
[Figure: the example image (left) and its segmentation mask (right)]

So each image has a corresponding segmentation mask, where each color corresponds to a different instance. Let's write a torch.utils.data.Dataset class for this dataset. In the code below, we wrap the images, bounding boxes and masks into torchvision.tv_tensors.TVTensor classes, so that we can apply torchvision's built-in transformations (the new Transforms API) for the given object detection and segmentation task. Namely, image tensors are wrapped by torchvision.tv_tensors.Image, bounding boxes by torchvision.tv_tensors.BoundingBoxes and masks by torchvision.tv_tensors.Mask. As torchvision.tv_tensors.TVTensor is a torch.Tensor subclass, the wrapped objects are also tensors and inherit the plain torch.Tensor API. For more information about torchvision tv_tensors, see this documentation.
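As a quick illustrative check (not part of the dataset code below) that wrapped objects really behave like plain tensors:

import torch
from torchvision import tv_tensors

# Wrapping a plain tensor in an Image TVTensor keeps the full torch.Tensor API.
img = tv_tensors.Image(torch.rand(3, 256, 256))
print(isinstance(img, torch.Tensor))  # True
print(img.shape, img.dtype)           # torch.Size([3, 256, 256]) torch.float32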

import os
import torch

from torchvision.io import read_image
from torchvision.ops.boxes import masks_to_boxes
from torchvision import tv_tensors
from torchvision.transforms.v2 import functional as F


class PennFudanDataset(torch.utils.data.Dataset):
    def __init__(self, root, transforms):
        self.root = root
        self.transforms = transforms
        # load all image files, sorting them to
        # ensure that they are aligned
        self.imgs = list(sorted(os.listdir(os.path.join(root, "PNGImages"))))
        self.masks = list(sorted(os.listdir(os.path.join(root, "PedMasks"))))

    def __getitem__(self, idx):
        # load images and masks
        img_path = os.path.join(self.root, "PNGImages", self.imgs[idx])
        mask_path = os.path.join(self.root, "PedMasks", self.masks[idx])
        img = read_image(img_path)
        mask = read_image(mask_path)
        # instances are encoded as different colors
        obj_ids = torch.unique(mask)
        # first id is the background, so remove it
        obj_ids = obj_ids[1:]
        num_objs = len(obj_ids)

        # split the color-encoded mask into a set
        # of binary masks
        masks = (mask == obj_ids[:, None, None]).to(dtype=torch.uint8)

        # get bounding box coordinates for each mask
        boxes = masks_to_boxes(masks)

        # there is only one class
        labels = torch.ones((num_objs,), dtype=torch.int64)

        image_id = idx
        area = (boxes[:, 3] - boxes[:, 1]) * (boxes[:, 2] - boxes[:, 0])
        # suppose all instances are not crowd
        iscrowd = torch.zeros((num_objs,), dtype=torch.int64)

        # Wrap sample and targets into torchvision tv_tensors:
        img = tv_tensors.Image(img)

        target = {}
        target["boxes"] = tv_tensors.BoundingBoxes(boxes, format="XYXY", canvas_size=F.get_size(img))
        target["masks"] = tv_tensors.Mask(masks)
        target["labels"] = labels
        target["image_id"] = image_id
        target["area"] = area
        target["iscrowd"] = iscrowd

        if self.transforms is not None:
            img, target = self.transforms(img, target)

        return img, target

    def __len__(self):
        return len(self.imgs)

That's all for the dataset. Now let's define a model that can perform predictions on this dataset.

Defining your model

In this tutorial, we will be using Mask R-CNN, which is based on Faster R-CNN. Faster R-CNN is a model that predicts both bounding boxes and class scores for potential objects in the image.

[Figure: Faster R-CNN architecture]

Mask R-CNN adds an extra branch to Faster R-CNN, which also predicts segmentation masks for each instance.

[Figure: Mask R-CNN architecture]

There are two common situations where one might want to modify one of the models available in TorchVision's Model Zoo. The first is when we want to start from a pre-trained model and just finetune the last layer. The other is when we want to replace the backbone of the model with a different one (for faster predictions, for example).

Let's see how we would do one or the other in the following sections.

1 - Finetuning from a pretrained model

Let's suppose that you want to start from a model pre-trained on COCO and want to finetune it for your particular classes. Here is a possible way of doing it:

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# load a model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# replace the classifier with a new one, that has
# num_classes which is user-defined
num_classes = 2  # 1 class (person) + background
# get number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
Downloading: "https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth

  0%|          | 0.00/160M [00:00<?, ?B/s]
 19%|#8        | 29.6M/160M [00:00<00:00, 311MB/s]
 38%|###8      | 61.4M/160M [00:00<00:00, 323MB/s]
 59%|#####9    | 94.6M/160M [00:00<00:00, 335MB/s]
 80%|########  | 128M/160M [00:00<00:00, 341MB/s]
100%|##########| 160M/160M [00:00<00:00, 338MB/s]

2 - Modifying the model to add a different backbone

import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# load a pre-trained model for classification and return
# only the features
backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
# ``FasterRCNN`` needs to know the number of
# output channels in a backbone. For mobilenet_v2, it's 1280
# so we need to add it here
backbone.out_channels = 1280

# let's make the RPN generate 5 x 3 anchors per spatial
# location, with 5 different sizes and 3 different aspect
# ratios. We have a Tuple[Tuple[int]] because each feature
# map could potentially have different sizes and
# aspect ratios
anchor_generator = AnchorGenerator(
    sizes=((32, 64, 128, 256, 512),),
    aspect_ratios=((0.5, 1.0, 2.0),)
)

# let's define what are the feature maps that we will
# use to perform the region of interest cropping, as well as
# the size of the crop after rescaling.
# if your backbone returns a Tensor, featmap_names is expected to
# be [0]. More generally, the backbone should return an
# ``OrderedDict[Tensor]``, and in ``featmap_names`` you can choose which
# feature maps to use.
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=['0'],
    output_size=7,
    sampling_ratio=2
)

# put the pieces together inside a Faster-RCNN model
model = FasterRCNN(
    backbone,
    num_classes=2,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler
)
Downloading: "https://download.pytorch.org/models/mobilenet_v2-7ebf99e0.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/mobilenet_v2-7ebf99e0.pth

  0%|          | 0.00/13.6M [00:00<?, ?B/s]
100%|##########| 13.6M/13.6M [00:00<00:00, 295MB/s]

An object detection and instance segmentation model for the PennFudan dataset

In our case, we want to finetune from a pre-trained model, given that our dataset is very small, so we will be following approach number 1.

Here we also want to compute the instance segmentation masks, so we will be using Mask R-CNN:

import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor


def get_model_instance_segmentation(num_classes):
    # load an instance segmentation model pre-trained on COCO
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # now get the number of input features for the mask classifier
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 256
    # and replace the mask predictor with a new one
    model.roi_heads.mask_predictor = MaskRCNNPredictor(
        in_features_mask,
        hidden_layer,
        num_classes
    )

    return model

That's it; this will make model ready to be trained and evaluated on your custom dataset, as the quick check below illustrates.
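A purely illustrative sanity check that the helper builds the model for our two classes:

# Illustrative: background + pedestrian. The box and mask heads are freshly
# initialized; the remaining weights come from the COCO checkpoint.
model = get_model_instance_segmentation(num_classes=2)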

Putting everything together

In references/detection/, we have a number of helper functions to simplify training and evaluating detection models. Here, we will use references/detection/engine.py and references/detection/utils.py. Just download everything under references/detection to your folder and use them here. On Linux, if you have wget, you can download them using the following commands:

os.system("wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/engine.py")
os.system("wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/utils.py")
os.system("wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/coco_utils.py")
os.system("wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/coco_eval.py")
os.system("wget https://raw.githubusercontent.com/pytorch/vision/main/references/detection/transforms.py")
0

Since v0.15.0, torchvision provides a new Transforms API to easily write data augmentation pipelines for object detection and segmentation tasks.

Let's write some helper functions for data augmentation / transformation:

from torchvision.transforms import v2 as T


def get_transform(train):
    transforms = []
    if train:
        transforms.append(T.RandomHorizontalFlip(0.5))
    transforms.append(T.ToDtype(torch.float, scale=True))
    transforms.append(T.ToPureTensor())
    return T.Compose(transforms)

Testing the forward() method (optional)

Before iterating over the dataset, it's good to see what the model expects during training and inference on sample data.

import utils

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
dataset = PennFudanDataset('data/PennFudanPed', get_transform(train=True))
data_loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=2,
    shuffle=True,
    collate_fn=utils.collate_fn
)

# For Training
images, targets = next(iter(data_loader))
images = list(image for image in images)
targets = [{k: v for k, v in t.items()} for t in targets]
output = model(images, targets)  # Returns losses and detections
print(output)

# For inference
model.eval()
x = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]
predictions = model(x)  # Returns predictions
print(predictions[0])
{'loss_classifier': tensor(0.0808, grad_fn=<NllLossBackward0>), 'loss_box_reg': tensor(0.0284, grad_fn=<DivBackward0>), 'loss_objectness': tensor(0.0186, grad_fn=<BinaryCrossEntropyWithLogitsBackward0>), 'loss_rpn_box_reg': tensor(0.0034, grad_fn=<DivBackward0>)}
{'boxes': tensor([], size=(0, 4), grad_fn=<StackBackward0>), 'labels': tensor([], dtype=torch.int64), 'scores': tensor([], grad_fn=<IndexBackward0>)}

Now let's write the main function which performs the training and the validation:

from engine import train_one_epoch, evaluate

# train on the GPU or on the CPU, if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')

# our dataset has two classes only - background and person
num_classes = 2
# use our dataset and defined transformations
dataset = PennFudanDataset('data/PennFudanPed', get_transform(train=True))
dataset_test = PennFudanDataset('data/PennFudanPed', get_transform(train=False))

# split the dataset in train and test set
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-50])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-50:])

# define training and validation data loaders
data_loader = torch.utils.data.DataLoader(
    dataset,
    batch_size=2,
    shuffle=True,
    collate_fn=utils.collate_fn
)

data_loader_test = torch.utils.data.DataLoader(
    dataset_test,
    batch_size=1,
    shuffle=False,
    collate_fn=utils.collate_fn
)

# get the model using our helper function
model = get_model_instance_segmentation(num_classes)

# move model to the right device
model.to(device)

# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(
    params,
    lr=0.005,
    momentum=0.9,
    weight_decay=0.0005
)

# and a learning rate scheduler
lr_scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer,
    step_size=3,
    gamma=0.1
)

# let's train it just for 2 epochs
num_epochs = 2

for epoch in range(num_epochs):
    # train for one epoch, printing every 10 iterations
    train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
    # update the learning rate
    lr_scheduler.step()
    # evaluate on the test dataset
    evaluate(model, data_loader_test, device=device)

print("That's it!")
Downloading: "https://download.pytorch.org/models/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/maskrcnn_resnet50_fpn_coco-bf2d0c1e.pth

  0%|          | 0.00/170M [00:00<?, ?B/s]
 20%|#9        | 33.2M/170M [00:00<00:00, 346MB/s]
 41%|####      | 69.4M/170M [00:00<00:00, 365MB/s]
 61%|######1   | 104M/170M [00:00<00:00, 338MB/s]
 86%|########5 | 145M/170M [00:00<00:00, 373MB/s]
100%|##########| 170M/170M [00:00<00:00, 373MB/s]
/var/lib/workspace/intermediate_source/engine.py:30: FutureWarning:

`torch.cuda.amp.autocast(args...)` is deprecated. Please use `torch.amp.autocast('cuda', args...)` instead.

Epoch: [0]  [ 0/60]  eta: 0:00:23  lr: 0.000090  loss: 4.9024 (4.9024)  loss_classifier: 0.4325 (0.4325)  loss_box_reg: 0.1060 (0.1060)  loss_mask: 4.3588 (4.3588)  loss_objectness: 0.0028 (0.0028)  loss_rpn_box_reg: 0.0023 (0.0023)  time: 0.3933  data: 0.0130  max mem: 2430
Epoch: [0]  [10/60]  eta: 0:00:11  lr: 0.000936  loss: 1.7738 (2.7698)  loss_classifier: 0.4092 (0.3555)  loss_box_reg: 0.3051 (0.2546)  loss_mask: 0.9490 (2.1314)  loss_objectness: 0.0219 (0.0214)  loss_rpn_box_reg: 0.0056 (0.0069)  time: 0.2262  data: 0.0146  max mem: 2597
Epoch: [0]  [20/60]  eta: 0:00:08  lr: 0.001783  loss: 0.8087 (1.7887)  loss_classifier: 0.2139 (0.2680)  loss_box_reg: 0.2062 (0.2336)  loss_mask: 0.3975 (1.2590)  loss_objectness: 0.0134 (0.0202)  loss_rpn_box_reg: 0.0076 (0.0080)  time: 0.2061  data: 0.0148  max mem: 2628
Epoch: [0]  [30/60]  eta: 0:00:06  lr: 0.002629  loss: 0.6611 (1.4258)  loss_classifier: 0.1405 (0.2256)  loss_box_reg: 0.2294 (0.2433)  loss_mask: 0.2604 (0.9278)  loss_objectness: 0.0162 (0.0192)  loss_rpn_box_reg: 0.0101 (0.0099)  time: 0.2104  data: 0.0158  max mem: 2770
Epoch: [0]  [40/60]  eta: 0:00:04  lr: 0.003476  loss: 0.5617 (1.2050)  loss_classifier: 0.0974 (0.1907)  loss_box_reg: 0.2434 (0.2350)  loss_mask: 0.2269 (0.7540)  loss_objectness: 0.0046 (0.0155)  loss_rpn_box_reg: 0.0118 (0.0098)  time: 0.2097  data: 0.0163  max mem: 2770
Epoch: [0]  [50/60]  eta: 0:00:02  lr: 0.004323  loss: 0.3587 (1.0395)  loss_classifier: 0.0546 (0.1627)  loss_box_reg: 0.1498 (0.2163)  loss_mask: 0.1641 (0.6382)  loss_objectness: 0.0020 (0.0129)  loss_rpn_box_reg: 0.0071 (0.0093)  time: 0.2049  data: 0.0156  max mem: 2770
Epoch: [0]  [59/60]  eta: 0:00:00  lr: 0.005000  loss: 0.3544 (0.9401)  loss_classifier: 0.0401 (0.1448)  loss_box_reg: 0.1229 (0.2037)  loss_mask: 0.1621 (0.5713)  loss_objectness: 0.0015 (0.0114)  loss_rpn_box_reg: 0.0063 (0.0089)  time: 0.2020  data: 0.0148  max mem: 2770
Epoch: [0] Total time: 0:00:12 (0.2094 s / it)
creating index...
index created!
Test:  [ 0/50]  eta: 0:00:04  model_time: 0.0770 (0.0770)  evaluator_time: 0.0061 (0.0061)  time: 0.0956  data: 0.0120  max mem: 2770
Test:  [49/50]  eta: 0:00:00  model_time: 0.0420 (0.0572)  evaluator_time: 0.0043 (0.0070)  time: 0.0636  data: 0.0095  max mem: 2770
Test: Total time: 0:00:03 (0.0752 s / it)
Averaged stats: model_time: 0.0420 (0.0572)  evaluator_time: 0.0043 (0.0070)
Accumulating evaluation results...
DONE (t=0.01s).
Accumulating evaluation results...
DONE (t=0.01s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.644
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.985
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.850
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.288
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.669
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.656
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.278
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.694
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.694
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.367
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.692
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.703
IoU metric: segm
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.668
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.974
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.782
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.376
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.535
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.683
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.291
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.720
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.723
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.633
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.667
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.731
Epoch: [1]  [ 0/60]  eta: 0:00:11  lr: 0.005000  loss: 0.2642 (0.2642)  loss_classifier: 0.0176 (0.0176)  loss_box_reg: 0.0678 (0.0678)  loss_mask: 0.1755 (0.1755)  loss_objectness: 0.0001 (0.0001)  loss_rpn_box_reg: 0.0032 (0.0032)  time: 0.1864  data: 0.0140  max mem: 2770
Epoch: [1]  [10/60]  eta: 0:00:10  lr: 0.005000  loss: 0.3356 (0.3761)  loss_classifier: 0.0461 (0.0549)  loss_box_reg: 0.1351 (0.1442)  loss_mask: 0.1637 (0.1665)  loss_objectness: 0.0008 (0.0021)  loss_rpn_box_reg: 0.0082 (0.0085)  time: 0.2079  data: 0.0159  max mem: 2770
Epoch: [1]  [20/60]  eta: 0:00:08  lr: 0.005000  loss: 0.3356 (0.3486)  loss_classifier: 0.0441 (0.0457)  loss_box_reg: 0.1143 (0.1178)  loss_mask: 0.1725 (0.1763)  loss_objectness: 0.0008 (0.0016)  loss_rpn_box_reg: 0.0067 (0.0071)  time: 0.2043  data: 0.0155  max mem: 2770
Epoch: [1]  [30/60]  eta: 0:00:06  lr: 0.005000  loss: 0.3100 (0.3299)  loss_classifier: 0.0350 (0.0448)  loss_box_reg: 0.0861 (0.1122)  loss_mask: 0.1469 (0.1644)  loss_objectness: 0.0007 (0.0015)  loss_rpn_box_reg: 0.0045 (0.0069)  time: 0.2048  data: 0.0160  max mem: 2770
Epoch: [1]  [40/60]  eta: 0:00:04  lr: 0.005000  loss: 0.2991 (0.3240)  loss_classifier: 0.0370 (0.0436)  loss_box_reg: 0.0861 (0.1065)  loss_mask: 0.1461 (0.1650)  loss_objectness: 0.0009 (0.0018)  loss_rpn_box_reg: 0.0051 (0.0071)  time: 0.2056  data: 0.0166  max mem: 2770
Epoch: [1]  [50/60]  eta: 0:00:02  lr: 0.005000  loss: 0.2602 (0.3125)  loss_classifier: 0.0292 (0.0415)  loss_box_reg: 0.0558 (0.1000)  loss_mask: 0.1560 (0.1628)  loss_objectness: 0.0008 (0.0017)  loss_rpn_box_reg: 0.0041 (0.0065)  time: 0.2043  data: 0.0155  max mem: 2770
Epoch: [1]  [59/60]  eta: 0:00:00  lr: 0.005000  loss: 0.2345 (0.2986)  loss_classifier: 0.0269 (0.0404)  loss_box_reg: 0.0512 (0.0937)  loss_mask: 0.1260 (0.1565)  loss_objectness: 0.0008 (0.0017)  loss_rpn_box_reg: 0.0033 (0.0063)  time: 0.2060  data: 0.0158  max mem: 2770
Epoch: [1] Total time: 0:00:12 (0.2052 s / it)
creating index...
index created!
Test:  [ 0/50]  eta: 0:00:02  model_time: 0.0407 (0.0407)  evaluator_time: 0.0031 (0.0031)  time: 0.0561  data: 0.0119  max mem: 2770
Test:  [49/50]  eta: 0:00:00  model_time: 0.0398 (0.0407)  evaluator_time: 0.0029 (0.0040)  time: 0.0543  data: 0.0095  max mem: 2770
Test: Total time: 0:00:02 (0.0555 s / it)
Averaged stats: model_time: 0.0398 (0.0407)  evaluator_time: 0.0029 (0.0040)
Accumulating evaluation results...
DONE (t=0.01s).
Accumulating evaluation results...
DONE (t=0.01s).
IoU metric: bbox
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.742
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.993
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.929
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.491
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.675
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.755
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.317
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.791
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.791
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.500
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.758
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.803
IoU metric: segm
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.723
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.972
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.888
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.462
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.572
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.741
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.309
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.767
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.767
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.567
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.692
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.780
That's it!

So after one epoch of training, we obtain a COCO-style box mAP above 50, and a mask mAP of about 65.

But what do the predictions look like? Let's take one image from the dataset and check:

import matplotlib.pyplot as plt

from torchvision.utils import draw_bounding_boxes, draw_segmentation_masks


image = read_image("data/PennFudanPed/PNGImages/FudanPed00046.png")
eval_transform = get_transform(train=False)

model.eval()
with torch.no_grad():
    x = eval_transform(image)
    # convert RGBA -> RGB and move to device
    x = x[:3, ...].to(device)
    predictions = model([x, ])
    pred = predictions[0]


image = (255.0 * (image - image.min()) / (image.max() - image.min())).to(torch.uint8)
image = image[:3, ...]
pred_labels = [f"pedestrian: {score:.3f}" for label, score in zip(pred["labels"], pred["scores"])]
pred_boxes = pred["boxes"].long()
output_image = draw_bounding_boxes(image, pred_boxes, pred_labels, colors="red")

masks = (pred["masks"] > 0.7).squeeze(1)
output_image = draw_segmentation_masks(output_image, masks, alpha=0.5, colors="blue")


plt.figure(figsize=(12, 12))
plt.imshow(output_image.permute(1, 2, 0))
[Figure: predicted bounding boxes and segmentation masks overlaid on the test image]

The results look good!

Wrapping up

In this tutorial, you have learned how to create your own training pipeline for object detection models on a custom dataset. For that, you wrote a torch.utils.data.Dataset class that returns the images together with the ground-truth boxes and segmentation masks. You also leveraged a Mask R-CNN model pre-trained on COCO train2017 in order to perform transfer learning on this new dataset.

For a more complete example, which includes multi-machine / multi-GPU training, check references/detection/train.py, which is present in the torchvision repository.

Total running time of the script: (0 minutes 45.958 seconds)
