torch.library

torch.library is a collection of APIs for extending PyTorch's core library of operators. It contains utilities for testing custom operators, creating new custom operators, and extending operators defined with PyTorch's C++ operator registration APIs (e.g. aten operators).

For a detailed guide on using these APIs effectively, please see the PyTorch Custom Operators landing page.

Testing custom ops

Use torch.library.opcheck() to test custom ops for incorrect usage of the Python torch.library and/or C++ TORCH_LIBRARY APIs. Additionally, if your operator supports training, use torch.autograd.gradcheck() to test that the gradients are mathematically correct.

torch.library.opcheck(op, args, kwargs=None, *, test_utils=('test_schema', 'test_autograd_registration', 'test_faketensor', 'test_aot_dispatch_dynamic'), raise_exception=True, atol=None, rtol=None)[source]

Given an operator and some sample arguments, tests if the operator is registered correctly.

That is, when you use the torch.library/TORCH_LIBRARY APIs to create a custom op, you specify metadata about the custom op (e.g. mutability info), and these APIs require that the functions you pass them satisfy certain properties (e.g. no data pointer access in the fake/meta/abstract kernel). opcheck tests these metadata and properties.

Concretely, we test the following:

  • test_schema: Tests that the schema matches the implementation of the operator. For example: if the schema specifies that a Tensor is mutated, then we check that the implementation mutates the Tensor. If the schema specifies that a new Tensor is returned, then we check that the implementation returns a new Tensor (instead of an existing one or a view of an existing one).

  • test_autograd_registration: If the operator supports training (autograd): we check that its autograd formula is registered via torch.library.register_autograd or a manual registration to one or more DispatchKey::Autograd keys. Any other DispatchKey-based registrations may lead to undefined behavior.

  • test_faketensor: Tests that the operator has a FakeTensor kernel (and that it is correct). The FakeTensor kernel is necessary (but not sufficient) for the operator to work with PyTorch compilation APIs (torch.compile/export/FX). We check that a FakeTensor kernel (also sometimes known as a meta kernel) was registered for the operator and that it is correct. This test takes the result of running the operator on real tensors and the result of running the operator on FakeTensors and checks that they have the same Tensor metadata (sizes/strides/dtype/device/etc.).

  • test_aot_dispatch_dynamic: Tests that the operator behaves correctly with the PyTorch compilation APIs (torch.compile/export/FX). It checks that the outputs (and gradients, if applicable) are the same under eager-mode PyTorch and torch.compile. This test is a superset of test_faketensor and is an end-to-end test; other things it tests are that the operator supports functionalization and that the backward pass (if it exists) also supports FakeTensor and functionalization.

For best results, please call opcheck multiple times with a representative set of inputs. If your operator supports autograd, please use opcheck with inputs where requires_grad = True; if your operator supports multiple devices (e.g. CPU and CUDA), please use opcheck with inputs on all supported devices.

Parameters
  • op (Union[OpOverload, OpOverloadPacket, CustomOpDef]) – The operator. Must either be a function decorated with torch.library.custom_op() or an OpOverload/OpOverloadPacket found in torch.ops.* (e.g. torch.ops.aten.sin, torch.ops.mylib.foo)

  • args (tuple[Any, ...]) – The positional arguments (args) to pass to the operator

  • kwargs (Optional[dict[str, Any]]) – The keyword arguments (kwargs) to pass to the operator

  • test_utils (Union[str, Sequence[str]]) – The tests to run. Default: all of them. Example: ("test_schema", "test_faketensor")

  • raise_exception (bool) – Whether to raise an exception on the first error. If False, a dict is returned with information on whether each test passed.

  • rtol (Optional[float]) – Relative tolerance for floating-point comparisons. If specified, atol must also be specified. If omitted, default values based on the dtype are selected (see the table in torch.testing.assert_close()).

  • atol (Optional[float]) – Absolute tolerance for floating-point comparisons. If specified, rtol must also be specified. If omitted, default values based on the dtype are selected (see the table in torch.testing.assert_close()).

Return type

dict[str, str]

Warning

opcheck and torch.autograd.gradcheck() test different things; opcheck tests whether your usage of the torch.library APIs is correct, while torch.autograd.gradcheck() tests whether your autograd formula is mathematically correct. Use both to test custom ops that support gradient computation.

Examples

>>> import torch
>>> from torch import Tensor
>>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=())
>>> def numpy_mul(x: Tensor, y: float) -> Tensor:
>>>     x_np = x.numpy(force=True)
>>>     z_np = x_np * y
>>>     return torch.from_numpy(z_np).to(x.device)
>>>
>>> @numpy_mul.register_fake
>>> def _(x, y):
>>>     return torch.empty_like(x)
>>>
>>> def setup_context(ctx, inputs, output):
>>>     y, = inputs
>>>     ctx.y = y
>>>
>>> def backward(ctx, grad):
>>>     return grad * ctx.y, None
>>>
>>> numpy_mul.register_autograd(backward, setup_context=setup_context)
>>>
>>> sample_inputs = [
>>>     (torch.randn(3), 3.14),
>>>     (torch.randn(2, 3, device='cuda'), 2.718),
>>>     (torch.randn(1, 10, requires_grad=True), 1.234),
>>>     (torch.randn(64, 64, device='cuda', requires_grad=True), 90.18),
>>> ]
>>>
>>> for args in sample_inputs:
>>>     torch.library.opcheck(numpy_mul, args)
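
Since opcheck does not check the numerics of your backward formula (see the warning above), you might additionally run torch.autograd.gradcheck() on double-precision inputs. A minimal sketch for the numpy_mul op defined above:

>>> # Hedged sketch: verify the backward formula numerically with gradcheck.
>>> # gradcheck prefers double-precision inputs with requires_grad=True.
>>> x = torch.randn(3, dtype=torch.double, requires_grad=True)
>>> torch.autograd.gradcheck(numpy_mul, (x, 3.14))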

Creating new custom ops in Python

Use torch.library.custom_op() to create new custom ops.

torch.library.custom_op(name, fn=None, /, *, mutates_args, device_types=None, schema=None)[source]

Wraps a function into a custom operator.

Reasons why you may want to create a custom op include:
  • Wrapping a third-party library or custom kernel to work with PyTorch subsystems such as Autograd.
  • Preventing torch.compile/export/FX tracing from peeking inside your function.

This API is used as a decorator around a function (please see the examples). The provided function must have type hints; these are needed to interface with PyTorch's various subsystems.

Parameters
  • name (str) – A name for the custom op that looks like "{namespace}::{name}", e.g. "mylib::my_linear". The name is used as the op's stable identifier in PyTorch subsystems (e.g. torch.export, FX graphs). To avoid name collisions, please use your project name as the namespace; e.g. all custom ops in pytorch/fbgemm use "fbgemm" as the namespace.

  • mutates_args (Iterable[str] or "unknown") – The names of the args that the function mutates. This must be accurate, otherwise the behavior is undefined. If "unknown", it pessimistically assumes that all inputs to the operator are being mutated.

  • device_types (None | str | Sequence[str]) – The device type(s) the function is valid for. If no device type is provided, the function is used as the default implementation for all device types. Examples: "cpu", "cuda". When registering a device-specific implementation for an operator that accepts no Tensors, we require the operator to have a "device: torch.device" argument.

  • schema (None | str) – A schema string for the operator. If None (recommended), we will infer a schema for the operator from its type annotations. We recommend letting us infer the schema unless you have a specific reason not to. Writing your own schema is error-prone. Example: "(Tensor x, int y) -> (Tensor, Tensor)".

Return type

Union[Callable[[Callable[[…], object]], CustomOpDef], CustomOpDef]

Note

We recommend not passing in a schema arg and instead letting us infer it from the type annotations. Writing your own schema is error-prone. You may wish to provide your own schema if our interpretation of the type annotations is not what you want. For more info on how to write a schema string, see here.

Example:
>>> import torch
>>> from torch import Tensor
>>> from torch.library import custom_op
>>> import numpy as np
>>>
>>> @custom_op("mylib::numpy_sin", mutates_args=())
>>> def numpy_sin(x: Tensor) -> Tensor:
>>>     x_np = x.cpu().numpy()
>>>     y_np = np.sin(x_np)
>>>     return torch.from_numpy(y_np).to(device=x.device)
>>>
>>> x = torch.randn(3)
>>> y = numpy_sin(x)
>>> assert torch.allclose(y, x.sin())
>>>
>>> # Example of a custom op that only works for one device type.
>>> @custom_op("mylib::numpy_sin_cpu", mutates_args=(), device_types="cpu")
>>> def numpy_sin_cpu(x: Tensor) -> Tensor:
>>>     x_np = x.numpy()
>>>     y_np = np.sin(x_np)
>>>     return torch.from_numpy(y_np)
>>>
>>> x = torch.randn(3)
>>> y = numpy_sin_cpu(x)
>>> assert torch.allclose(y, x.sin())
>>>
>>> # Example of a custom op that mutates an input
>>> @custom_op("mylib::numpy_sin_inplace", mutates_args={"x"}, device_types="cpu")
>>> def numpy_sin_inplace(x: Tensor) -> None:
>>>     x_np = x.numpy()
>>>     np.sin(x_np, out=x_np)
>>>
>>> x = torch.randn(3)
>>> expected = x.sin()
>>> numpy_sin_inplace(x)
>>> assert torch.allclose(x, expected)
>>>
>>> # Example of a factory function
>>> @torch.library.custom_op("mylib::bar", mutates_args={}, device_types="cpu")
>>> def bar(device: torch.device) -> Tensor:
>>>     return torch.ones(3)
>>>
>>> bar("cpu")
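
If you do choose to pass an explicit schema (see the note above), a minimal sketch might look like the following; mylib::scaled_add is a hypothetical op used purely for illustration:

>>> # Hypothetical op with a hand-written schema string instead of a schema
>>> # inferred from the type annotations.
>>> @custom_op("mylib::scaled_add", mutates_args=(), schema="(Tensor x, float alpha) -> Tensor")
>>> def scaled_add(x: Tensor, alpha: float) -> Tensor:
>>>     return x + alpha
>>>
>>> assert torch.allclose(scaled_add(torch.ones(3), 2.0), torch.full((3,), 3.0))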
torch.library.triton_op(name, fn=None, /, *, mutates_args, schema=None)[source]

Create a custom operator whose implementation is backed by one or more triton kernels.

This is a more structured way of using triton kernels with PyTorch. Prefer using triton kernels with no torch.library custom operator wrapper (such as torch.library.custom_op() or torch.library.triton_op()) because that is simpler; only use torch.library.custom_op()/torch.library.triton_op() if you want to create an operator that behaves like PyTorch built-in operators. For example, you may use a torch.library wrapper API to define the behavior of the triton kernel when it is passed a tensor subclass or run under a TorchDispatchMode.

Use torch.library.triton_op() instead of torch.library.custom_op() when the implementation consists of one or more triton kernels. torch.library.custom_op() treats custom operators as opaque (torch.compile() and torch.export.export() will never trace into them), but triton_op makes the implementation visible to these subsystems, allowing them to optimize the triton kernel(s).

Note that fn must only consist of calls to PyTorch-understood operators and triton kernels. Any triton kernels called inside fn must be wrapped in a call to torch.library.wrap_triton().

Parameters
  • name (str) – A name for the custom op that looks like "{namespace}::{name}", e.g. "mylib::my_linear". The name is used as the op's stable identifier in PyTorch subsystems (e.g. torch.export, FX graphs). To avoid name collisions, please use your project name as the namespace; e.g. all custom ops in pytorch/fbgemm use "fbgemm" as the namespace.

  • mutates_args (Iterable[str] or "unknown") – The names of the args that the function mutates. This must be accurate, otherwise the behavior is undefined. If "unknown", it pessimistically assumes that all inputs to the operator are being mutated.

  • schema (None | str) – A schema string for the operator. If None (recommended), we will infer a schema for the operator from its type annotations. We recommend letting us infer the schema unless you have a specific reason not to. Writing your own schema is error-prone. Example: "(Tensor x, int y) -> (Tensor, Tensor)".

Return type

Callable

Examples

>>> import torch
>>> from torch.library import triton_op, wrap_triton
>>>
>>> import triton
>>> from triton import language as tl
>>>
>>> @triton.jit
>>> def add_kernel(
>>>     in_ptr0,
>>>     in_ptr1,
>>>     out_ptr,
>>>     n_elements,
>>>     BLOCK_SIZE: "tl.constexpr",
>>> ):
>>>     pid = tl.program_id(axis=0)
>>>     block_start = pid * BLOCK_SIZE
>>>     offsets = block_start + tl.arange(0, BLOCK_SIZE)
>>>     mask = offsets < n_elements
>>>     x = tl.load(in_ptr0 + offsets, mask=mask)
>>>     y = tl.load(in_ptr1 + offsets, mask=mask)
>>>     output = x + y
>>>     tl.store(out_ptr + offsets, output, mask=mask)
>>>
>>> @triton_op("mylib::add", mutates_args={})
>>> def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
>>>     output = torch.empty_like(x)
>>>     n_elements = output.numel()
>>>
>>>     def grid(meta):
>>>         return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
>>>
>>>     # NB: we need to wrap the triton kernel in a call to wrap_triton
>>>     wrap_triton(add_kernel)[grid](x, y, output, n_elements, 16)
>>>     return output
>>>
>>> @torch.compile
>>> def f(x, y):
>>>     return add(x, y)
>>>
>>> x = torch.randn(3, device="cuda")
>>> y = torch.randn(3, device="cuda")
>>>
>>> z = f(x, y)
>>> assert torch.allclose(z, x + y)
torch.library.wrap_triton(triton_kernel, /)[source]

Allows capture of a triton kernel into a graph via make_fx or non-strict torch.export.

These technologies perform Dispatcher-based tracing (via __torch_dispatch__) and cannot see calls to raw triton kernels. The wrap_triton API wraps a triton kernel into a callable that can actually be traced into a graph.

Please use this API together with torch.library.triton_op().

Examples

>>> import torch
>>> import triton
>>> from triton import language as tl
>>> from torch.fx.experimental.proxy_tensor import make_fx
>>> from torch.library import wrap_triton
>>>
>>> @triton.jit
>>> def add_kernel(
>>>     in_ptr0,
>>>     in_ptr1,
>>>     out_ptr,
>>>     n_elements,
>>>     BLOCK_SIZE: "tl.constexpr",
>>> ):
>>>     pid = tl.program_id(axis=0)
>>>     block_start = pid * BLOCK_SIZE
>>>     offsets = block_start + tl.arange(0, BLOCK_SIZE)
>>>     mask = offsets < n_elements
>>>     x = tl.load(in_ptr0 + offsets, mask=mask)
>>>     y = tl.load(in_ptr1 + offsets, mask=mask)
>>>     output = x + y
>>>     tl.store(out_ptr + offsets, output, mask=mask)
>>>
>>> def add(x, y):
>>>     output = torch.empty_like(x)
>>>     n_elements = output.numel()
>>>
>>>     def grid_fn(meta):
>>>         return (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),)
>>>
>>>     wrap_triton(add_kernel)[grid_fn](x, y, output, n_elements, 16)
>>>     return output
>>>
>>> x = torch.randn(3, device="cuda")
>>> y = torch.randn(3, device="cuda")
>>> gm = make_fx(add)(x, y)
>>> print(gm.code)
>>> # def forward(self, x_1, y_1):
>>> #     empty_like = torch.ops.aten.empty_like.default(x_1, pin_memory = False)
>>> #     triton_kernel_wrapper_mutation_proxy = triton_kernel_wrapper_mutation(
>>> #         kernel_idx = 0, constant_args_idx = 0,
>>> #         grid = [(1, 1, 1)], kwargs = {
>>> #             'in_ptr0': x_1, 'in_ptr1': y_1, 'out_ptr': empty_like,
>>> #             'n_elements': 3, 'BLOCK_SIZE': 16
>>> #         })
>>> #     return empty_like
Return type

Any

Extending custom ops (created from Python or C++)

Use the register.* methods, such as torch.library.register_kernel() and torch.library.register_fake(), to add implementations for any operators (they may have been created using torch.library.custom_op() or via PyTorch's C++ operator registration APIs).

torch.library.register_kernel(op, device_types, func=None, /, *, lib=None)[source]

Register an implementation for a device type for this operator.

Some valid device_types are: "cpu", "cuda", "xla", "mps", "ipu", "xpu". This API may be used as a decorator.

Parameters
  • op (str | OpOverload) – The operator to register an impl to.

  • device_types (None | str | Sequence[str]) – The device_types to register an impl to. If None, we will register to all device types – please only use this option if your implementation is truly device-type-agnostic.

  • func (Callable) – The function to register as the implementation for the given device types.

  • lib (Optional[Library]) – If provided, the lifetime of this registration will be tied to the lifetime of the Library object.

Example:
>>> import torch
>>> from torch import Tensor
>>> from torch.library import custom_op
>>> import numpy as np
>>>
>>> # Create a custom op that works on cpu
>>> @custom_op("mylib::numpy_sin", mutates_args=(), device_types="cpu")
>>> def numpy_sin(x: Tensor) -> Tensor:
>>>     x_np = x.numpy()
>>>     y_np = np.sin(x_np)
>>>     return torch.from_numpy(y_np)
>>>
>>> # Add implementations for the cuda device
>>> @torch.library.register_kernel("mylib::numpy_sin", "cuda")
>>> def _(x):
>>>     x_np = x.cpu().numpy()
>>>     y_np = np.sin(x_np)
>>>     return torch.from_numpy(y_np).to(device=x.device)
>>>
>>> x_cpu = torch.randn(3)
>>> x_cuda = x_cpu.cuda()
>>> assert torch.allclose(numpy_sin(x_cpu), x_cpu.sin())
>>> assert torch.allclose(numpy_sin(x_cuda), x_cuda.sin())
torch.library.register_autocast(op, device_type, cast_inputs, /, *, lib=None)[source]

Register an autocast dispatch rule for this custom op.

Valid device_type values include: "cpu" and "cuda".

Parameters
  • op (str | OpOverload) – The operator to register an autocast dispatch rule to.

  • device_type (str) – Device type to use. "cuda" or "cpu". The type is the same as the type attribute of a torch.device; thus, you may obtain the device type of a tensor using Tensor.device.type.

  • cast_inputs (torch.dtype) – When the custom op runs in an autocast-enabled region, casts incoming floating-point Tensors to the target dtype (non-floating-point Tensors are not affected), then executes the custom op with autocast disabled.

  • lib (Optional[Library]) – If provided, the lifetime of this registration will be tied to the lifetime of the Library object.

Example:
>>> import torch
>>> from torch import Tensor
>>> from torch.library import custom_op
>>>
>>> # Create a custom op that works on cuda
>>> @torch.library.custom_op("mylib::my_sin", mutates_args=())
>>> def my_sin(x: Tensor) -> Tensor:
>>>     return torch.sin(x)
>>>
>>> # Register autocast dispatch rule for the cuda device
>>> torch.library.register_autocast("mylib::my_sin", "cuda", torch.float16)
>>>
>>> x = torch.randn(3, dtype=torch.float32, device="cuda")
>>> with torch.autocast("cuda", dtype=torch.float16):
>>>     y = torch.ops.mylib.my_sin(x)
>>> assert y.dtype == torch.float16
torch.library.register_autograd(op, backward, /, *, setup_context=None, lib=None)[source]

Register a backward formula for this custom op.

In order for an operator to work with autograd, you need to register a backward formula: 1. You must tell us how to compute gradients during the backward pass by providing a "backward" function. 2. If you need any values from the forward pass to compute gradients, you can use setup_context to save values for backward.

backward runs during the backward pass. It accepts (ctx, *grads): grads is one or more gradients, and the number of gradients matches the number of outputs of the operator. The ctx object is the same ctx object used by torch.autograd.Function. The semantics of backward_fn are the same as torch.autograd.Function.backward().

setup_context(ctx, inputs, output) runs during the forward pass. Please save quantities needed for backward onto the ctx object, either via torch.autograd.function.FunctionCtx.save_for_backward() or by assigning them as attributes of ctx. If your custom op has keyword-only args, we expect the signature of setup_context to be setup_context(ctx, inputs, keyword_only_inputs, output).

Both setup_context_fn and backward_fn must be traceable. That is, they may not directly access torch.Tensor.data_ptr() and they must not depend on or mutate global state. If you need a non-traceable backward, you can make it a separate custom_op that you call inside backward_fn.

If you need different autograd behavior on different devices, then we recommend creating two different custom operators, one for each device that needs different behavior, and switching between them at runtime.

Examples

>>> import torch
>>> import numpy as np
>>> from torch import Tensor
>>>
>>> @torch.library.custom_op("mylib::numpy_sin", mutates_args=())
>>> def numpy_sin(x: Tensor) -> Tensor:
>>>     x_np = x.cpu().numpy()
>>>     y_np = np.sin(x_np)
>>>     return torch.from_numpy(y_np).to(device=x.device)
>>>
>>> def setup_context(ctx, inputs, output) -> None:
>>>     x, = inputs
>>>     ctx.save_for_backward(x)
>>>
>>> def backward(ctx, grad):
>>>     x, = ctx.saved_tensors
>>>     return grad * x.cos()
>>>
>>> torch.library.register_autograd(
...     "mylib::numpy_sin", backward, setup_context=setup_context
... )
>>>
>>> x = torch.randn(3, requires_grad=True)
>>> y = numpy_sin(x)
>>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y))
>>> assert torch.allclose(grad_x, x.cos())
>>>
>>> # Example with a keyword-only arg
>>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=())
>>> def numpy_mul(x: Tensor, *, val: float) -> Tensor:
>>>     x_np = x.cpu().numpy()
>>>     y_np = x_np * val
>>>     return torch.from_numpy(y_np).to(device=x.device)
>>>
>>> def setup_context(ctx, inputs, keyword_only_inputs, output) -> None:
>>>     ctx.val = keyword_only_inputs["val"]
>>>
>>> def backward(ctx, grad):
>>>     return grad * ctx.val
>>>
>>> torch.library.register_autograd(
...     "mylib::numpy_mul", backward, setup_context=setup_context
... )
>>>
>>> x = torch.randn(3, requires_grad=True)
>>> y = numpy_mul(x, val=3.14)
>>> (grad_x,) = torch.autograd.grad(y, x, torch.ones_like(y))
>>> assert torch.allclose(grad_x, torch.full_like(x, 3.14))
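
As noted above, backward_fn must be traceable; if your backward needs non-traceable work (for example, calling into NumPy), one option is to factor that work into its own custom op and call it from backward_fn. A minimal sketch, using a hypothetical op mylib::numpy_sin_backward:

>>> @torch.library.custom_op("mylib::numpy_sin_backward", mutates_args=())
>>> def numpy_sin_backward(grad: Tensor, x: Tensor) -> Tensor:
>>>     # The non-traceable (NumPy) work lives inside its own custom op.
>>>     return torch.from_numpy(np.cos(x.numpy(force=True)) * grad.numpy(force=True)).to(x.device)
>>>
>>> def backward(ctx, grad):
>>>     x, = ctx.saved_tensors
>>>     return numpy_sin_backward(grad, x)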
torch.library.register_fake(op, func=None, /, *, lib=None, _stacklevel=1)[source]

Register a FakeTensor implementation (“fake impl”) for this operator.

Also sometimes known as a “meta kernel”, “abstract impl”.

A "FakeTensor implementation" specifies the behavior of this operator on Tensors that carry no data ("FakeTensor"). Given some input Tensors with certain properties (sizes/strides/storage_offset/device), it specifies what the properties of the output Tensors are.

The FakeTensor implementation has the same signature as the operator. It is run for both FakeTensors and meta tensors. To write a FakeTensor implementation, assume that all Tensor inputs to the operator are regular CPU/CUDA/Meta tensors, but they do not have storage, and you are trying to return regular CPU/CUDA/Meta tensor(s) as output. The FakeTensor implementation must consist of only PyTorch operations (and may not directly access the storage or data of any input or intermediate Tensors).

This API may be used as a decorator (see examples).

For a detailed guide on custom ops, please see https://pytorch.ac.cn/tutorials/advanced/custom_ops_landing_page.html

Examples

>>> import torch
>>> import numpy as np
>>> from torch import Tensor
>>>
>>> # Example 1: an operator without data-dependent output shape
>>> @torch.library.custom_op("mylib::custom_linear", mutates_args=())
>>> def custom_linear(x: Tensor, weight: Tensor, bias: Tensor) -> Tensor:
>>>     raise NotImplementedError("Implementation goes here")
>>>
>>> @torch.library.register_fake("mylib::custom_linear")
>>> def _(x, weight, bias):
>>>     assert x.dim() == 2
>>>     assert weight.dim() == 2
>>>     assert bias.dim() == 1
>>>     assert x.shape[1] == weight.shape[1]
>>>     assert weight.shape[0] == bias.shape[0]
>>>     assert x.device == weight.device
>>>
>>>     return (x @ weight.t()) + bias
>>>
>>> with torch._subclasses.fake_tensor.FakeTensorMode():
>>>     x = torch.randn(2, 3)
>>>     w = torch.randn(3, 3)
>>>     b = torch.randn(3)
>>>     y = torch.ops.mylib.custom_linear(x, w, b)
>>>
>>> assert y.shape == (2, 3)
>>>
>>> # Example 2: an operator with data-dependent output shape
>>> @torch.library.custom_op("mylib::custom_nonzero", mutates_args=())
>>> def custom_nonzero(x: Tensor) -> Tensor:
>>>     x_np = x.numpy(force=True)
>>>     res = np.stack(np.nonzero(x_np), axis=1)
>>>     return torch.tensor(res, device=x.device)
>>>
>>> @torch.library.register_fake("mylib::custom_nonzero")
>>> def _(x):
>>>     # The number of nonzero elements is data-dependent.
>>>     # Since we cannot peek at the data in a fake impl,
>>>     # we use the ctx object to construct a new symint that
>>>     # represents the data-dependent size.
>>>     ctx = torch.library.get_ctx()
>>>     nnz = ctx.new_dynamic_size()
>>>     shape = [nnz, x.dim()]
>>>     result = x.new_empty(shape, dtype=torch.int64)
>>>     return result
>>>
>>> from torch.fx.experimental.proxy_tensor import make_fx
>>>
>>> x = torch.tensor([0, 1, 2, 3, 4, 0])
>>> trace = make_fx(torch.ops.mylib.custom_nonzero, tracing_mode="symbolic")(x)
>>> trace.print_readable()
>>>
>>> assert torch.allclose(trace(x), torch.ops.mylib.custom_nonzero(x))
torch.library.register_vmap(op, func=None, /, *, lib=None)[source]

Register a vmap implementation to support torch.vmap() for this custom op.

This API may be used as a decorator (see examples).

In order for an operator to work with torch.vmap(), you may need to register a vmap implementation in the following signature

vmap_func(info, in_dims: Tuple[Optional[int]], *args, **kwargs),

where *args and **kwargs are the arguments and kwargs for op. We do not support kwarg-only Tensor args.

It specifies how we compute the batched version of op given inputs with an additional dimension (specified by in_dims).

For each arg in args, in_dims has a corresponding Optional[int]. It is None if the arg is not a Tensor or if the arg is not being vmapped over, otherwise, it is an integer specifying what dimension of the Tensor is being vmapped over.

info is a collection of additional metadata that may be helpful: info.batch_size specifies the size of the dimension being vmapped over, while info.randomness is the randomness option that was passed to torch.vmap().

The return of the function func is a tuple of (output, out_dims). Similar to in_dims, out_dims should be of the same structure as output and contain one out_dim per output that specifies if the output has the vmapped dimension and what index it is in.

Examples

>>> import torch
>>> import numpy as np
>>> from torch import Tensor
>>> from typing import Tuple
>>>
>>> def to_numpy(tensor):
>>>     return tensor.cpu().numpy()
>>>
>>> lib = torch.library.Library("mylib", "FRAGMENT")
>>> @torch.library.custom_op("mylib::numpy_cube", mutates_args=())
>>> def numpy_cube(x: Tensor) -> Tuple[Tensor, Tensor]:
>>>     x_np = to_numpy(x)
>>>     dx = torch.tensor(3 * x_np ** 2, device=x.device)
>>>     return torch.tensor(x_np ** 3, device=x.device), dx
>>>
>>> def numpy_cube_vmap(info, in_dims, x):
>>>     result = numpy_cube(x)
>>>     return result, (in_dims[0], in_dims[0])
>>>
>>> torch.library.register_vmap(numpy_cube, numpy_cube_vmap)
>>>
>>> x = torch.randn(3)
>>> torch.vmap(numpy_cube)(x)
>>>
>>> @torch.library.custom_op("mylib::numpy_mul", mutates_args=())
>>> def numpy_mul(x: Tensor, y: Tensor) -> Tensor:
>>>     return torch.tensor(to_numpy(x) * to_numpy(y), device=x.device)
>>>
>>> @torch.library.register_vmap("mylib::numpy_mul")
>>> def numpy_mul_vmap(info, in_dims, x, y):
>>>     x_bdim, y_bdim = in_dims
>>>     x = x.movedim(x_bdim, -1) if x_bdim is not None else x.unsqueeze(-1)
>>>     y = y.movedim(y_bdim, -1) if y_bdim is not None else y.unsqueeze(-1)
>>>     result = x * y
>>>     result = result.movedim(-1, 0)
>>>     return result, 0
>>>
>>>
>>> x = torch.randn(3)
>>> y = torch.randn(3)
>>> torch.vmap(numpy_mul)(x, y)

Note

The vmap function should aim to preserve the semantics of the entire custom operator. That is, grad(vmap(op)) should be replaceable with a grad(map(op)).

If your custom operator has any custom behavior in the backward pass, please keep this in mind.

torch.library.impl_abstract(qualname, func=None, *, lib=None, _stacklevel=1)[source]

This API was renamed to torch.library.register_fake() in PyTorch 2.4. Please use that instead.

torch.library.get_ctx()[source]

get_ctx() returns the current AbstractImplCtx object.

Calling get_ctx() is only valid inside of a fake impl (see torch.library.register_fake() for more usage details).

Return type

FakeImplCtx
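
As a quick illustration, here is a minimal sketch of using get_ctx() inside a fake impl to create a data-dependent output size; mylib::my_nonzero is a hypothetical op (see the torch.library.register_fake() examples above for a fuller version):

>>> import torch
>>> @torch.library.custom_op("mylib::my_nonzero", mutates_args=())
>>> def my_nonzero(x: torch.Tensor) -> torch.Tensor:
>>>     return x.nonzero()
>>>
>>> @torch.library.register_fake("mylib::my_nonzero")
>>> def _(x):
>>>     # Inside the fake impl, get_ctx() gives access to the FakeImplCtx,
>>>     # which can mint a new data-dependent symbolic size.
>>>     ctx = torch.library.get_ctx()
>>>     nnz = ctx.new_dynamic_size()
>>>     return x.new_empty([nnz, x.dim()], dtype=torch.int64)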

torch.library.register_torch_dispatch(op, torch_dispatch_class, func=None, /, *, lib=None)[source]

Registers a torch_dispatch rule for the given operator and torch_dispatch_class.

This allows for open registration to specify the behavior between the operator and the torch_dispatch_class without needing to modify the torch_dispatch_class or the operator directly.

The torch_dispatch_class is either a Tensor subclass with __torch_dispatch__ or a TorchDispatchMode.

If it is a Tensor subclass, we expect func to have the following signature: (cls, func: OpOverload, types: Tuple[type, ...], args, kwargs) -> Any

If it is a TorchDispatchMode, we expect func to have the following signature: (mode, func: OpOverload, types: Tuple[type, ...], args, kwargs) -> Any

args and kwargs will have been normalized the same way they are in __torch_dispatch__ (see __torch_dispatch__ calling convention).

Examples

>>> import torch
>>>
>>> @torch.library.custom_op("mylib::foo", mutates_args={})
>>> def foo(x: torch.Tensor) -> torch.Tensor:
>>>     return x.clone()
>>>
>>> class MyMode(torch.utils._python_dispatch.TorchDispatchMode):
>>>     def __torch_dispatch__(self, func, types, args=(), kwargs=None):
>>>         return func(*args, **kwargs)
>>>
>>> @torch.library.register_torch_dispatch("mylib::foo", MyMode)
>>> def _(mode, func, types, args, kwargs):
>>>     x, = args
>>>     return x + 1
>>>
>>> x = torch.randn(3)
>>> y = foo(x)
>>> assert torch.allclose(y, x)
>>>
>>> with MyMode():
>>>     y = foo(x)
>>> assert torch.allclose(y, x + 1)
torch.library.infer_schema(prototype_function, /, *, mutates_args, op_name=None)[source]

Parses the schema of a given function with type hints. The schema is inferred from the function’s type hints, and can be used to define a new operator.

We make the following assumptions:

  • None of the outputs alias any of the inputs or each other.

  • String type annotations “device, dtype, Tensor, types” without library specification are
    assumed to be torch.*. Similarly, string type annotations “Optional, List, Sequence, Union”
    without library specification are assumed to be typing.*.
  • Only the args listed in mutates_args are being mutated. If mutates_args is “unknown”,
    it assumes that all inputs to the operator are being mutated.

Callers (e.g. the custom ops API) are responsible for checking these assumptions.

Parameters
  • prototype_function (Callable) – The function whose schema is inferred from its type annotations.

  • op_name (Optional[str]) – The name of the operator in the schema. If name is None, then the name is not included in the inferred schema. Note that the input schema to torch.library.Library.define requires an operator name.

  • mutates_args ("unknown" | Iterable[str]) – The arguments that are mutated in the function.

Returns

The inferred schema.

Return type

str

Examples

>>> import torch
>>> from torch.library import infer_schema
>>> def foo_impl(x: torch.Tensor) -> torch.Tensor:
>>>     return x.sin()
>>>
>>> infer_schema(foo_impl, op_name="foo", mutates_args={})
foo(Tensor x) -> Tensor
>>>
>>> infer_schema(foo_impl, mutates_args={})
(Tensor x) -> Tensor
class torch._library.custom_ops.CustomOpDef(namespace, name, schema, fn)[source]

CustomOpDef is a wrapper around a function that turns it into a custom op.

It has various methods for registering additional behavior for this custom op.

You should not instantiate CustomOpDef directly; instead, use the torch.library.custom_op() API.

set_kernel_enabled(device_type, enabled=True)[source]

Disable or re-enable an already registered kernel for this custom operator.

If the kernel is already disabled/enabled, this is a no-op.

Note

If a kernel is first disabled and then registered, it is disabled until enabled again.

Parameters
  • device_type (str) – The device type to disable/enable the kernel for.

  • enabled (bool) – Whether to disable or enable the kernel.

Examples

>>> inp = torch.randn(1)
>>>
>>> # define custom op `f`.
>>> @custom_op("mylib::f", mutates_args=())
>>> def f(x: Tensor) -> Tensor:
>>>     return torch.zeros(1)
>>>
>>> print(f(inp))  # tensor([0.]), default kernel
>>>
>>> @f.register_kernel("cpu")
>>> def _(x):
>>>     return torch.ones(1)
>>>
>>> print(f(inp))  # tensor([1.]), CPU kernel
>>>
>>> # temporarily disable the CPU kernel
>>> with f.set_kernel_enabled("cpu", enabled = False):
>>>     print(f(inp))  # tensor([0.]) with CPU kernel disabled

Low-level APIs

The following APIs are direct bindings to PyTorch’s C++ low-level operator registration APIs.

Warning

The low-level operator registration APIs and the PyTorch Dispatcher are a complicated PyTorch concept. We recommend you use the higher-level APIs above (that do not require a torch.library.Library object) when possible. This blog post (http://blog.ezyang.com/2020/09/lets-talk-about-the-pytorch-dispatcher/) is a good starting point to learn about the PyTorch Dispatcher.

A tutorial that walks you through some examples on how to use this API is available on Google Colab.

class torch.library.Library(ns, kind, dispatch_key='')[source]

A class to create libraries that can be used to register new operators or override operators in existing libraries from Python. A user can optionally pass in a dispatch key name if they only want to register kernels corresponding to one specific dispatch key.

To create a library to override operators in an existing library (with name ns), set the kind to "IMPL". To create a new library (with name ns) to register new operators, set the kind to "DEF". To create a fragment of a possibly existing library to register operators (and bypass the limitation that there is only one library for a given namespace), set the kind to "FRAGMENT".

Parameters
  • ns – library name

  • kind – “DEF”, “IMPL” (default: “IMPL”), “FRAGMENT”

  • dispatch_key – PyTorch dispatch key (default: “”)
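
For instance, the three kinds described above might be used as in this minimal sketch (the namespaces are purely illustrative):

>>> new_lib = torch.library.Library("mylib", "DEF")        # define new ops under "mylib"
>>> frag_lib = torch.library.Library("mylib", "FRAGMENT")  # add more ops to the existing "mylib"
>>> override_lib = torch.library.Library("aten", "IMPL")   # override kernels of existing aten ops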

define(schema, alias_analysis='', *, tags=())[source]

Defines a new operator and its semantics in the ns namespace.

Parameters
  • schema – function schema to define a new operator.

  • alias_analysis (optional) – Indicates if the aliasing properties of the operator arguments can be inferred from the schema (default behavior) or not (“CONSERVATIVE”).

  • tags (Tag | Sequence[Tag]) – one or more torch.Tag to apply to this operator. Tagging an operator changes the operator’s behavior under various PyTorch subsystems; please read the docs for the torch.Tag carefully before applying it.

Returns

name of the operator as inferred from the schema.

Example:
>>> my_lib = Library("mylib", "DEF")
>>> my_lib.define("sum(Tensor self) -> Tensor")
fallback(fn, dispatch_key='', *, with_keyset=False)[source]

Registers the function implementation as the fallback for the given key.

This function only works for a library with global namespace ("_").

Parameters
  • fn – function used as fallback for the given dispatch key or fallthrough_kernel() to register a fallthrough.

  • dispatch_key – dispatch key that the input function should be registered for. By default, it uses the dispatch key that the library was created with.

  • with_keyset – flag controlling if the current dispatcher call keyset should be passed as the first argument to fn when calling. This should be used to create the appropriate keyset for redispatch calls.

Example:
>>> my_lib = Library("_", "IMPL")
>>> def fallback_kernel(op, *args, **kwargs):
>>>     # Handle all autocast ops generically
>>>     # ...
>>>     raise NotImplementedError("Implementation goes here")
>>> my_lib.fallback(fallback_kernel, "Autocast")
impl(op_name, fn, dispatch_key='', *, with_keyset=False)[source]

Registers the function implementation for an operator defined in the library.

Parameters
  • op_name – operator name (along with the overload) or OpOverload object.

  • fn – function that’s the operator implementation for the input dispatch key or fallthrough_kernel() to register a fallthrough.

  • dispatch_key – dispatch key that the input function should be registered for. By default, it uses the dispatch key that the library was created with.

  • with_keyset – flag controlling if the current dispatcher call keyset should be passed as the first argument to fn when calling. This should be used to create the appropriate keyset for redispatch calls.

Example:
>>> my_lib = Library("aten", "IMPL")
>>> def div_cpu(self, other):
>>>     return self * (1 / other)
>>> my_lib.impl("div.Tensor", div_cpu, "CPU")
torch.library.fallthrough_kernel()[source]

A dummy function to pass to Library.impl in order to register a fallthrough.
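
A minimal sketch of registering a fallthrough; the op mylib::my_identity and the choice of the AutocastCPU key (so the op is left untouched by autocast) are purely illustrative:

>>> my_lib = torch.library.Library("mylib", "FRAGMENT")
>>> my_lib.define("my_identity(Tensor x) -> Tensor")
>>> my_lib.impl("my_identity", lambda x: x.clone(), "CompositeExplicitAutograd")
>>> # Fall through the AutocastCPU key so autocast does not wrap this op.
>>> my_lib.impl("my_identity", torch.library.fallthrough_kernel, "AutocastCPU")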

torch.library.define(qualname, schema, *, lib=None, tags=())[source]
torch.library.define(lib, schema, alias_analysis='')

Defines a new operator.

In PyTorch, defining an op (short for "operator") is a two-step process:
  • we need to define the op (by providing an operator name and schema)
  • we need to implement behavior for how the operator interacts with various PyTorch subsystems, like CPU/CUDA Tensors, Autograd, etc.

This entrypoint defines the custom operator (the first step); you must then perform the second step by calling various impl_* APIs, like torch.library.impl() or torch.library.register_fake().

Parameters
  • qualname (str) – The qualified name for the operator. Should be a string that looks like “namespace::name”, e.g. “aten::sin”. Operators in PyTorch need a namespace to avoid name collisions; a given operator may only be created once. If you are writing a Python library, we recommend the namespace to be the name of your top-level module.

  • schema (str) – The schema of the operator. E.g. “(Tensor x) -> Tensor” for an op that accepts one Tensor and returns one Tensor. It does not contain the operator name (that is passed in qualname).

  • lib (Optional[Library]) – If provided, the lifetime of this operator will be tied to the lifetime of the Library object.

  • tags (Tag | Sequence[Tag]) – one or more torch.Tag to apply to this operator. Tagging an operator changes the operator’s behavior under various PyTorch subsystems; please read the docs for the torch.Tag carefully before applying it.

Example:
>>> import torch
>>> import numpy as np
>>>
>>> # Define the operator
>>> torch.library.define("mylib::sin", "(Tensor x) -> Tensor")
>>>
>>> # Add implementations for the operator
>>> @torch.library.impl("mylib::sin", "cpu")
>>> def f(x):
>>>     return torch.from_numpy(np.sin(x.numpy()))
>>>
>>> # Call the new operator from torch.ops.
>>> x = torch.randn(3)
>>> y = torch.ops.mylib.sin(x)
>>> assert torch.allclose(y, x.sin())
torch.library.impl(lib, name, dispatch_key='')[source]
torch.library.impl(qualname: str, types: Union[str, Sequence[str]], func: Literal[None] = None, *, lib: Optional[Library] = None) → Callable[[Callable[..., object]], None]
torch.library.impl(qualname: str, types: Union[str, Sequence[str]], func: Callable[..., object], *, lib: Optional[Library] = None) → None
torch.library.impl(lib: Library, name: str, dispatch_key: str = '') → Callable[[Callable[_P, _T]], Callable[_P, _T]]

Register an implementation for a device type for this operator.

You may pass "default" for types to register this implementation as the default implementation for ALL device types. Please only use this if the implementation truly supports all device types; for example, this is true if it is a composition of built-in PyTorch operators.

This API may be used as a decorator. You can use nested decorators with this API provided they return a function and are placed inside this API (see Example 2).

Some valid types are: "cpu", "cuda", "xla", "mps", "ipu", "xpu".

Parameters
  • qualname (str) – Should be a string that looks like "namespace::operator_name".

  • types (str | Sequence[str]) – The device types to register an impl to.

  • lib (Optional[Library]) – If provided, the lifetime of this registration will be tied to the lifetime of the Library object.

Examples

>>> import torch
>>> import numpy as np
>>> # Example 1: Register function.
>>> # Define the operator
>>> torch.library.define("mylib::mysin", "(Tensor x) -> Tensor")
>>>
>>> # Add implementations for the cpu device
>>> @torch.library.impl("mylib::mysin", "cpu")
>>> def f(x):
>>>     return torch.from_numpy(np.sin(x.numpy()))
>>>
>>> x = torch.randn(3)
>>> y = torch.ops.mylib.mysin(x)
>>> assert torch.allclose(y, x.sin())
>>>
>>> # Example 2: Register function with decorator.
>>> def custom_decorator(func):
>>>     def wrapper(*args, **kwargs):
>>>         return func(*args, **kwargs) + 1
>>>     return wrapper
>>>
>>> # Define the operator
>>> torch.library.define("mylib::sin_plus_one", "(Tensor x) -> Tensor")
>>>
>>> # Add implementations for the operator
>>> @torch.library.impl("mylib::sin_plus_one", "cpu")
>>> @custom_decorator
>>> def f(x):
>>>     return torch.from_numpy(np.sin(x.numpy()))
>>>
>>> # Call the new operator from torch.ops.
>>> x = torch.randn(3)
>>>
>>> y1 = torch.ops.mylib.sin_plus_one(x)
>>> y2 = torch.sin(x) + 1
>>> assert torch.allclose(y1, y2)
