
Intel® Extension for PyTorch* Backend on Intel® CPUs

Created On: Oct 03, 2023 | Last Updated: Jun 11, 2024 | Last Verified: Nov 05, 2024

To work better with torch.compile on Intel® CPUs, Intel® Extension for PyTorch* implements a backend, ipex. It aims to improve hardware resource utilization on Intel platforms for better performance. The ipex backend is implemented with further customizations designed in Intel® Extension for PyTorch* for model compilation.
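
The ipex backend becomes available to TorchDynamo once Intel® Extension for PyTorch* is imported, as in the examples below. A minimal sketch to confirm the registration in your environment (the exact list of backends depends on your installation):

import torch
import intel_extension_for_pytorch as ipex  # importing the extension makes the "ipex" backend available

# "ipex" should appear among the registered TorchDynamo backends
print(torch._dynamo.list_backends())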

Usage Example

Train FP32

Check the example below to learn how to utilize the ipex backend with torch.compile for model training with FP32 data type.

import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = 'datasets/cifar10/'

transform = torchvision.transforms.Compose([
  torchvision.transforms.Resize((224, 224)),
  torchvision.transforms.ToTensor(),
  torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_dataset = torchvision.datasets.CIFAR10(
  root=DATA,
  train=True,
  transform=transform,
  download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(
  dataset=train_dataset,
  batch_size=128
)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
model.train()

#################### code changes ####################
import intel_extension_for_pytorch as ipex

# Invoke the following API optionally, to apply frontend optimizations
model, optimizer = ipex.optimize(model, optimizer=optimizer)

compile_model = torch.compile(model, backend="ipex")
######################################################

for batch_idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    output = compile_model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
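
Note that the first training iteration typically takes noticeably longer, because torch.compile traces and compiles the model on first use; later iterations reuse the compiled code. A minimal timing sketch, reusing the objects defined above, to observe this effect:

import time

for batch_idx, (data, target) in enumerate(train_loader):
    start = time.perf_counter()
    optimizer.zero_grad()
    output = compile_model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    print(f"iteration {batch_idx}: {time.perf_counter() - start:.3f} s")
    if batch_idx == 10:  # a handful of iterations is enough to see the effect
        break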

Train BF16

Check the example below to learn how to utilize the ipex backend with torch.compile for model training with BFloat16 data type.

import torch
import torchvision

LR = 0.001
DOWNLOAD = True
DATA = 'datasets/cifar10/'

transform = torchvision.transforms.Compose([
  torchvision.transforms.Resize((224, 224)),
  torchvision.transforms.ToTensor(),
  torchvision.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
train_dataset = torchvision.datasets.CIFAR10(
  root=DATA,
  train=True,
  transform=transform,
  download=DOWNLOAD,
)
train_loader = torch.utils.data.DataLoader(
  dataset=train_dataset,
  batch_size=128
)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR, momentum=0.9)
model.train()

#################### code changes ####################
import intel_extension_for_pytorch as ipex

# Invoke the following API optionally, to apply frontend optimizations
model, optimizer = ipex.optimize(model, dtype=torch.bfloat16, optimizer=optimizer)

compile_model = torch.compile(model, backend="ipex")
######################################################

for batch_idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    # Run the forward pass and loss computation under BFloat16 autocast;
    # backward() and the optimizer step run outside the autocast region.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        output = compile_model(data)
        loss = criterion(output, target)
    loss.backward()
    optimizer.step()

Inference FP32

Check the example below to learn how to utilize the ipex backend with torch.compile for model inference with FP32 data type.

import torch
import torchvision.models as models

model = models.resnet50(weights='ResNet50_Weights.DEFAULT')
model.eval()
data = torch.rand(1, 3, 224, 224)

#################### code changes ####################
import intel_extension_for_pytorch as ipex

# Invoke the following API optionally, to apply frontend optimizations
model = ipex.optimize(model, weights_prepack=False)

compile_model = torch.compile(model, backend="ipex")
######################################################

with torch.no_grad():
    compile_model(data)
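
When measuring inference latency, it is common practice to run a few warm-up calls first so that the one-time compilation cost is excluded from the measurement. A minimal sketch, reusing model and data from above:

import time

with torch.no_grad():
    # Warm-up calls trigger compilation and lazy initialization
    for _ in range(3):
        compile_model(data)
    # Measure steady-state latency afterwards
    start = time.perf_counter()
    for _ in range(10):
        compile_model(data)
    print(f"average latency: {(time.perf_counter() - start) / 10 * 1000:.2f} ms")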

Inference BF16

Check the example below to learn how to utilize the ipex backend with torch.compile for model inference with BFloat16 data type.

import torch
import torchvision.models as models

model = models.resnet50(weights='ResNet50_Weights.DEFAULT')
model.eval()
data = torch.rand(1, 3, 224, 224)

#################### code changes ####################
import intel_extension_for_pytorch as ipex

# Invoke the following API optionally, to apply frontend optimizations
model = ipex.optimize(model, dtype=torch.bfloat16, weights_prepack=False)

compile_model = torch.compile(model, backend="ipex")
######################################################

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    compile_model(data)
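
Because BFloat16 reduces numerical precision, a quick sanity check against the uncompiled model can be reassuring. A minimal sketch that compares the compiled model with the ipex-optimized eager model on the same input (the tolerances below are illustrative, not prescriptive):

with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    ref_output = model(data)              # eager execution of the optimized model
    compiled_output = compile_model(data) # execution through the ipex backend

print(torch.allclose(ref_output.float(), compiled_output.float(), rtol=1e-2, atol=1e-2))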
