
ActorCriticOperator

torchrl.modules.tensordict_module.ActorCriticOperator(*args, **kwargs) [source]

Actor-critic operator.

This class wraps together an actor and a value model that share a common observation embedding network:

[Figure: the observation is fed to the common module, which produces a "hidden" entry; the policy reads "hidden" and writes "action"; the value network reads "hidden" and "action" and writes the value estimate.]

Note

For a similar class that returns the action and a state value \(V(s)\) instead of a state-action value, see ActorValueOperator.

To simplify the workflow, this class comes with a get_policy_operator() method, which returns a standalone TDModule with the dedicated functionality. get_critic_operator(), on the other hand, returns the parent object, since the value is computed based on the policy output (see the sketch after the example below).

Parameters
  • common_operator (TensorDictModule) – a common operator that reads observations and produces a hidden variable

  • policy_operator (TensorDictModule) – a policy operator that reads the hidden variable and returns an action

  • value_operator (TensorDictModule) – a value operator that reads the hidden variable and returns a value

Examples

>>> import torch
>>> from torch import nn
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.modules import ProbabilisticActor, SafeModule
>>> from torchrl.modules import ValueOperator, TanhNormal, ActorCriticOperator, NormalParamExtractor, MLP
>>> module_hidden = torch.nn.Linear(4, 4)
>>> td_module_hidden = SafeModule(
...    module=module_hidden,
...    in_keys=["observation"],
...    out_keys=["hidden"],
...    )
>>> module_action = nn.Sequential(torch.nn.Linear(4, 8), NormalParamExtractor())
>>> module_action = TensorDictModule(module_action, in_keys=["hidden"], out_keys=["loc", "scale"])
>>> td_module_action = ProbabilisticActor(
...    module=module_action,
...    in_keys=["loc", "scale"],
...    out_keys=["action"],
...    distribution_class=TanhNormal,
...    return_log_prob=True,
...    )
>>> module_value = MLP(in_features=8, out_features=1, num_cells=[])
>>> td_module_value = ValueOperator(
...    module=module_value,
...    in_keys=["hidden", "action"],
...    out_keys=["state_action_value"],
...    )
>>> td_module = ActorCriticOperator(td_module_hidden, td_module_action, td_module_value)
>>> td = TensorDict({"observation": torch.randn(3, 4)}, [3,])
>>> td_clone = td_module(td.clone())
>>> print(td_clone)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        hidden: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        state_action_value: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
>>> td_clone = td_module.get_policy_operator()(td.clone())
>>> print(td_clone)  # no value
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        hidden: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
>>> td_clone = td_module.get_critic_operator()(td.clone())
>>> print(td_clone)  # includes the action: the critic operator is the full parent module
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        hidden: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        loc: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        observation: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        sample_log_prob: Tensor(shape=torch.Size([3]), device=cpu, dtype=torch.float32, is_shared=False),
        scale: Tensor(shape=torch.Size([3, 4]), device=cpu, dtype=torch.float32, is_shared=False),
        state_action_value: Tensor(shape=torch.Size([3, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([3]),
    device=None,
    is_shared=False)
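Building on the example above, the asymmetry described earlier can be checked directly. This is a minimal sketch, assuming get_critic_operator() returns the parent object as stated:

>>> policy = td_module.get_policy_operator()  # standalone sub-graph: observation -> action, no value
>>> critic = td_module.get_critic_operator()  # needs the action, so it spans the whole graph
>>> assert critic is td_module                # holds if the method returns the parent object, as documented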
get_critic_operator() → TensorDictModuleWrapper [source]

Returns a standalone critic network operator that maps a state-action pair to a critic estimate.

get_policy_head() → SafeSequential [source]

Returns the policy head.

get_value_head() → SafeSequential [source]

Returns the value head.
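Unlike the standalone operators above, the heads can be applied to an already-computed embedding. A minimal sketch, reusing td_module from the example and assuming the heads read the "hidden" entry directly rather than the observation:

>>> td_hidden = TensorDict({"hidden": torch.randn(3, 4)}, [3])
>>> td_hidden = td_module.get_policy_head()(td_hidden)  # writes "loc", "scale", "action" (and "sample_log_prob")
>>> td_hidden = td_module.get_value_head()(td_hidden)   # reads "hidden" and "action", writes "state_action_value"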

get_value_operator() → TensorDictModuleWrapper [source]

Returns a standalone value network operator that maps an observation to a value estimate.

