
torch.Tensor.is_leaf

Tensor.is_leaf

By convention, all Tensors that have requires_grad set to False will be leaf Tensors.

Tensors that have requires_grad set to True will be leaf Tensors if they were created by the user. This means they are not the result of an operation, and so grad_fn is None.
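
As a quick illustration (a minimal sketch, not part of the original documentation; the exact object address in the repr below will vary), grad_fn is None for a tensor created directly by the user, while the result of an operation records the backward function that produced it:

>>> x = torch.rand(3, requires_grad=True)
>>> print(x.grad_fn)
None
# x was created by the user, so no operation produced it
>>> y = x * 2
>>> y.grad_fn
<MulBackward0 object at 0x...>
# y is the result of a multiplication, so autograd recorded MulBackward0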

Only leaf Tensors will have their grad populated during a call to backward(). To get grad populated for non-leaf Tensors, you can use retain_grad().

Example:

>>> a = torch.rand(10, requires_grad=True)
>>> a.is_leaf
True
>>> b = torch.rand(10, requires_grad=True).cuda()
>>> b.is_leaf
False
# b was created by the operation that cast a cpu Tensor into a cuda Tensor
>>> c = torch.rand(10, requires_grad=True) + 2
>>> c.is_leaf
False
# c was created by the addition operation
>>> d = torch.rand(10).cuda()
>>> d.is_leaf
True
# d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)
>>> e = torch.rand(10).cuda().requires_grad_()
>>> e.is_leaf
True
# e requires gradients and has no operations creating it
>>> f = torch.rand(10, requires_grad=True, device="cuda")
>>> f.is_leaf
True
# f requires grad, has no operation creating it
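
To illustrate the note about grad population above (a minimal sketch, not part of the original example), backward() fills grad for the leaf x, while the intermediate y only keeps its grad because retain_grad() was called on it:

>>> x = torch.rand(10, requires_grad=True)
>>> y = x * 2
>>> y.retain_grad()
>>> y.sum().backward()
>>> x.grad is None
False
>>> y.grad is None
False
# x is a leaf, so backward() populates x.grad
# y is a non-leaf; without the retain_grad() call, y.grad would stay None
# (and PyTorch warns when accessing .grad on a non-leaf that did not retain it)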
