DdpgMlpActor
- class torchrl.modules.DdpgMlpActor(action_dim: int, mlp_net_kwargs: Optional[dict] = None, device: Optional[Union[device, str, int]] = None)
DDPG Actor class.

Presented in "Continuous control with deep reinforcement learning", https://arxiv.org/pdf/1509.02971.pdf

The DDPG Actor takes an observation vector as input and returns an action computed from it. It is trained to maximize the value returned by the DDPG Q-value network; a minimal sketch of this objective follows the example at the end of this section.
- Parameters:
action_dim (int) – length of the action vector.
mlp_net_kwargs (dict, optional) –
kwargs for the MLP (a customization sketch follows this parameter list). Defaults to

>>> {
...     'in_features': None,
...     'out_features': action_dim,
...     'depth': 2,
...     'num_cells': [400, 300],
...     'activation_class': nn.ELU,
...     'bias_last_layer': True,
... }
device (torch.device, optional) – device on which the module should be created.
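For instance, the hidden layout can be overridden by passing a partial mlp_net_kwargs dict, with the remaining keys keeping their defaults. This is a minimal sketch, assuming the kwargs are forwarded to torchrl.modules.MLP as the default dict above suggests; num_cells and activation_class are the keys shown in that default, not an exhaustive list of supported options:

>>> import torch.nn as nn
>>> from torchrl.modules import DdpgMlpActor
>>> # override hidden sizes and activation; out_features stays action_dim
>>> actor = DdpgMlpActor(
...     action_dim=4,
...     mlp_net_kwargs={"num_cells": [256, 256], "activation_class": nn.ReLU},
... )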
Examples
>>> import torch
>>> from torchrl.modules import DdpgMlpActor
>>> actor = DdpgMlpActor(action_dim=4)
>>> print(actor)
DdpgMlpActor(
  (mlp): MLP(
    (0): LazyLinear(in_features=0, out_features=400, bias=True)
    (1): ELU(alpha=1.0)
    (2): Linear(in_features=400, out_features=300, bias=True)
    (3): ELU(alpha=1.0)
    (4): Linear(in_features=300, out_features=4, bias=True)
  )
)
>>> obs = torch.zeros(10, 6)
>>> action = actor(obs)
>>> print(action.shape)
torch.Size([10, 4])
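The training objective mentioned above (maximizing the value returned by the DDPG Q-value network) can be sketched as follows. This is an illustration rather than a full DDPG training loop: it assumes torchrl.modules.DdpgMlpQNet, whose forward pass is taken here to accept an observation and an action, and in a real update the Q-network parameters would be held fixed for this step.

>>> import torch
>>> from torchrl.modules import DdpgMlpActor, DdpgMlpQNet
>>> actor = DdpgMlpActor(action_dim=4)
>>> qnet = DdpgMlpQNet()
>>> obs = torch.randn(10, 6)
>>> # actor loss: negative Q-value of the actor's own action, averaged over the batch
>>> loss = -qnet(obs, actor(obs)).mean()
>>> loss.backward()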