torch.func Whirlwind Tour¶
What is torch.func?¶
torch.func, previously known as functorch, is a library for JAX-like composable function transforms in PyTorch.
A "function transform" is a higher-order function that accepts a numerical function and returns a new function that computes a different quantity.
torch.func has auto-differentiation transforms (grad(f) returns a function that computes the gradient of f), a vectorization/batching transform (vmap(f) returns a function that computes f over batches of inputs), and more. These function transforms can compose with each other arbitrarily. For example, composing vmap(grad(f)) computes a quantity called per-sample gradients that stock PyTorch cannot compute efficiently today.
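As a tiny illustration of that composition (an added sketch, using only the transforms introduced in the sections below), wrapping grad(torch.sin) in vmap evaluates the derivative independently at each element of a batch:
import torch
from torch.func import grad, vmap
# grad(torch.sin) differentiates at a single scalar input; vmap maps it over a batch.
x = torch.randn(3)
per_element_cos = vmap(grad(torch.sin))(x)
assert torch.allclose(per_element_cos, x.cos())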
Why composable function transforms?¶
There are a number of use cases that are tricky to do in PyTorch today:
- computing per-sample gradients (or other per-sample quantities)
- running ensembles of models on a single machine (sketched below)
- efficiently batching together tasks in the inner loop of MAML
- efficiently computing Jacobians and Hessians
- efficiently computing batched Jacobians and Hessians
Composing the vmap(), grad(), vjp(), and jvp() transforms allows us to express the above without designing a separate subsystem for each.
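For instance, the model-ensembling use case can be sketched as follows (an illustrative sketch, not part of the original tour; the model architecture and sizes are arbitrary assumptions): stack the parameters of several identically-structured modules and vmap a functional call over them.
import copy
import torch
from torch.func import functional_call, stack_module_state, vmap
# A hypothetical ensemble of four identically-structured models.
models = [torch.nn.Linear(5, 3) for _ in range(4)]
params, buffers = stack_module_state(models)
# A stateless "meta" copy serves as the template for functional_call.
base_model = copy.deepcopy(models[0]).to("meta")
def call_model(params, buffers, x):
    return functional_call(base_model, (params, buffers), (x,))
x = torch.randn(64, 5)  # one minibatch shared by all ensemble members
out = vmap(call_model, in_dims=(0, 0, None))(params, buffers, x)
assert out.shape == (4, 64, 3)  # one output per ensemble member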
What are the transforms?¶
grad() (gradient computation)¶
grad(func) is our gradient computation transform. It returns a function that computes the gradients of func. It assumes func returns a single-element Tensor, and by default it computes the gradients of the output of func with respect to the first input.
import torch
from torch.func import grad
x = torch.randn([])
cos_x = grad(lambda x: torch.sin(x))(x)
assert torch.allclose(cos_x, x.cos())
# Second-order gradients
neg_sin_x = grad(grad(lambda x: torch.sin(x)))(x)
assert torch.allclose(neg_sin_x, -x.sin())
vmap() (auto-vectorization)¶
Note: vmap() has restrictions on the code it can be used with. For more details, please see UX Limitations.
vmap(func)(*inputs) is a transform that adds a dimension to all Tensor operations in func. vmap(func) returns a new function that maps func over some dimension (default: 0) of each Tensor in the inputs.
vmap is useful for hiding batch dimensions: one can write a function func that runs on individual examples and then lift it with vmap(func) into a function that can take batches of examples, leading to a simpler modeling experience:
import torch
from torch.func import vmap
batch_size, feature_size = 3, 5
weights = torch.randn(feature_size, requires_grad=True)
def model(feature_vec):
    # Very simple linear model with activation
    assert feature_vec.dim() == 1
    return feature_vec.dot(weights).relu()
examples = torch.randn(batch_size, feature_size)
result = vmap(model)(examples)
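As a quick sanity check (this assertion is an added illustration, not part of the original example), the vmapped call agrees with writing the batched computation by hand:
# vmap(model) over the batch matches the equivalent manual batched computation.
assert torch.allclose(result, (examples @ weights).relu())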
When composed with grad(), vmap() can be used to compute per-sample gradients:
from torch.func import grad, vmap
batch_size, feature_size = 3, 5

def model(weights, feature_vec):
    # Very simple linear model with activation
    assert feature_vec.dim() == 1
    return feature_vec.dot(weights).relu()

def compute_loss(weights, example, target):
    y = model(weights, example)
    return ((y - target) ** 2).mean()  # MSELoss

weights = torch.randn(feature_size, requires_grad=True)
examples = torch.randn(batch_size, feature_size)
targets = torch.randn(batch_size)
inputs = (weights, examples, targets)
grad_weight_per_example = vmap(grad(compute_loss), in_dims=(None, 0, 0))(*inputs)
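As a sanity check (an added illustration), the vmapped per-sample gradients match an explicit Python loop over the examples:
# The same per-sample gradients, computed one example at a time.
expected = torch.stack([
    grad(compute_loss)(weights, examples[i], targets[i])
    for i in range(batch_size)
])
assert torch.allclose(grad_weight_per_example, expected)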
vjp() (vector-Jacobian product)¶
The vjp() transform applies func to inputs and returns a new function that computes the vector-Jacobian product (vjp) given some cotangent Tensors.
from torch.func import vjp
inputs = torch.randn(3)
func = torch.sin
cotangents = (torch.randn(3),)
outputs, vjp_fn = vjp(func, inputs)
vjps = vjp_fn(*cotangents)
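To see what vjp_fn returns here (an added check, not part of the original snippet): the Jacobian of torch.sin is diag(cos(x)), so the vjp is just the cotangent scaled elementwise by cos(x).
# For an elementwise function, the vector-Jacobian product is an elementwise scaling.
assert torch.allclose(vjps[0], cotangents[0] * inputs.cos())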
jvp() (Jacobian-vector product)¶
The jvp() transform computes Jacobian-vector products and is also known as "forward-mode AD". Unlike most other transforms, it is not a higher-order function: it returns the outputs of func(inputs) as well as the jvps.
from torch.func import jvp
x = torch.randn(5)
y = torch.randn(5)
f = lambda x, y: (x * y)
_, out_tangent = jvp(f, (x, y), (torch.ones(5), torch.ones(5)))
assert torch.allclose(out_tangent, x + y)
jacrev(), jacfwd(), and hessian()¶
The jacrev() transform returns a new function that takes in x and returns the Jacobian of the function with respect to x, computed using reverse-mode AD.
from torch.func import jacrev
x = torch.randn(5)
jacobian = jacrev(torch.sin)(x)
expected = torch.diag(torch.cos(x))
assert torch.allclose(jacobian, expected)
jacrev() can be composed with vmap() to produce batched Jacobians:
x = torch.randn(64, 5)
jacobian = vmap(jacrev(torch.sin))(x)
assert jacobian.shape == (64, 5, 5)
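Each slice of the batched result is the Jacobian for the corresponding example (an added check): for torch.sin it is diag(cos(x_i)).
expected = torch.stack([torch.diag(xi.cos()) for xi in x])
assert torch.allclose(jacobian, expected)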
jacfwd() is a drop-in replacement for jacrev() that computes Jacobians using forward-mode AD:
from torch.func import jacfwd
x = torch.randn(5)
jacobian = jacfwd(torch.sin)(x)
expected = torch.diag(torch.cos(x))
assert torch.allclose(jacobian, expected)
Composing jacrev() with itself or with jacfwd() can produce Hessians:
def f(x):
    return x.sin().sum()
x = torch.randn(5)
hessian0 = jacrev(jacrev(f))(x)
hessian1 = jacfwd(jacrev(f))(x)
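As an added check: for f(x) = x.sin().sum() the Hessian is diagonal with entries -sin(x), and both compositions agree.
# Both compositions produce the same (diagonal) Hessian for this function.
assert torch.allclose(hessian0, hessian1)
assert torch.allclose(hessian0, torch.diag(-x.sin()))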
hessian() is a convenience function that combines jacfwd() and jacrev():
from torch.func import hessian
def f(x):
    return x.sin().sum()
x = torch.randn(5)
hess = hessian(f)(x)
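The result matches the compositions above (an added check):
assert torch.allclose(hess, torch.diag(-x.sin()))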