Compiling Stable Diffusion model using the torch.compile backend
This interactive script is intended as a sample of the Torch-TensorRT workflow with the torch.compile backend on a Stable Diffusion model. A sample output is shown below.

Imports and Model Definition
import torch
import torch_tensorrt
from diffusers import DiffusionPipeline
model_id = "CompVis/stable-diffusion-v1-4"
device = "cuda:0"
# Instantiate Stable Diffusion Pipeline with FP16 weights
pipe = DiffusionPipeline.from_pretrained(
    model_id, revision="fp16", torch_dtype=torch.float16
)
pipe = pipe.to(device)
backend = "torch_tensorrt"
# Optimize the UNet portion with Torch-TensorRT
pipe.unet = torch.compile(
    pipe.unet,
    backend=backend,
    options={
        # TensorRT does not support 64-bit types; truncate long/double inputs
        "truncate_long_and_double": True,
        # Allow TensorRT to choose between FP32 and FP16 kernels
        "enabled_precisions": {torch.float32, torch.float16},
    },
    dynamic=False,
)
Inference
prompt = "a majestic castle in the clouds"
image = pipe(prompt).images[0]
image.save("images/majestic_castle.png")
image.show()
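Once the engines are built, subsequent calls run the TensorRT-optimized UNet. The timing snippet below is an illustrative sketch, not part of the original script, assuming a single CUDA device; it synchronizes around the call so the measured wall-clock time covers all GPU work.

import time

torch.cuda.synchronize(device)
start = time.perf_counter()
image = pipe(prompt).images[0]
torch.cuda.synchronize(device)
print(f"Inference latency: {time.perf_counter() - start:.2f} s")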