QMixer
- class torchrl.modules.QMixer(state_shape: Union[Tuple[int, ...], Size], mixing_embed_dim: int, n_agents: int, device: Union[device, str, int])
QMix mixer.

Mixes the local Q values of the agents into a global Q value through a monotonic hyper-network whose weights are obtained from the global state. From the paper https://arxiv.org/abs/1803.11485 .

It transforms the local value of each agent's chosen action, of shape (*B, self.n_agents, 1), into a global value with shape (*B, 1). Used with torchrl.objectives.QMixerLoss. See examples/multiagent/qmix_vdn.py for examples.

- Parameters:
state_shape (tuple or torch.Size) – the shape of the state (excluding eventual leading batch dimensions).
mixing_embed_dim (int) – the size of the mixing embedding dimension.
n_agents (int) – the number of agents.
device (str or torch.device) – the torch device for the network.
Examples
>>> import torch
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.modules.models.multiagent import QMixer
>>> n_agents = 4
>>> qmix = TensorDictModule(
...     module=QMixer(
...         state_shape=(64, 64, 3),
...         mixing_embed_dim=32,
...         n_agents=n_agents,
...         device="cpu",
...     ),
...     in_keys=[("agents", "chosen_action_value"), "state"],
...     out_keys=["chosen_action_value"],
... )
>>> td = TensorDict({"agents": TensorDict({"chosen_action_value": torch.zeros(32, n_agents, 1)}, [32, n_agents]), "state": torch.zeros(32, 64, 64, 3)}, [32])
>>> td
TensorDict(
    fields={
        agents: TensorDict(
            fields={
                chosen_action_value: Tensor(shape=torch.Size([32, 4, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
            batch_size=torch.Size([32, 4]),
            device=None,
            is_shared=False),
        state: Tensor(shape=torch.Size([32, 64, 64, 3]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([32]),
    device=None,
    is_shared=False)
>>> qmix(td)
TensorDict(
    fields={
        agents: TensorDict(
            fields={
                chosen_action_value: Tensor(shape=torch.Size([32, 4, 1]), device=cpu, dtype=torch.float32, is_shared=False)},
            batch_size=torch.Size([32, 4]),
            device=None,
            is_shared=False),
        chosen_action_value: Tensor(shape=torch.Size([32, 1]), device=cpu, dtype=torch.float32, is_shared=False),
        state: Tensor(shape=torch.Size([32, 64, 64, 3]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([32]),
    device=None,
    is_shared=False)
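For intuition, below is a minimal, self-contained sketch of the monotonic mixing described above, written in plain PyTorch rather than using torchrl internals. The class name MonotonicMixerSketch and the flat state_dim input are assumptions made for illustration (QMixer itself also accepts an image-shaped state, as in the example above); the key point is that taking the absolute value of the hypernetwork outputs keeps the mixing weights non-negative, which makes the global Q value monotonically increasing in each agent's local Q value.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotonicMixerSketch(nn.Module):
    # Toy QMix-style mixer (illustrative; not torchrl's implementation).
    def __init__(self, state_dim: int, n_agents: int, embed_dim: int):
        super().__init__()
        self.n_agents, self.embed_dim = n_agents, embed_dim
        # Hypernetworks: map the global state to the mixer's weights.
        self.hyper_w1 = nn.Linear(state_dim, n_agents * embed_dim)
        self.hyper_b1 = nn.Linear(state_dim, embed_dim)
        self.hyper_w2 = nn.Linear(state_dim, embed_dim)
        self.hyper_b2 = nn.Sequential(
            nn.Linear(state_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, 1)
        )

    def forward(self, qvals: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # qvals: (B, n_agents, 1), state: (B, state_dim) -> global value (B, 1)
        bs = qvals.shape[0]
        q = qvals.view(bs, 1, self.n_agents)
        # abs() enforces non-negative mixing weights, which is what
        # guarantees monotonicity of the global Q in the local Q values.
        w1 = self.hyper_w1(state).abs().view(bs, self.n_agents, self.embed_dim)
        b1 = self.hyper_b1(state).view(bs, 1, self.embed_dim)
        hidden = F.elu(torch.bmm(q, w1) + b1)
        w2 = self.hyper_w2(state).abs().view(bs, self.embed_dim, 1)
        b2 = self.hyper_b2(state).view(bs, 1, 1)
        return (torch.bmm(hidden, w2) + b2).view(bs, 1)

mixer = MonotonicMixerSketch(state_dim=16, n_agents=4, embed_dim=32)
out = mixer(torch.zeros(8, 4, 1), torch.zeros(8, 16))  # shape: (8, 1)

Because the biases are unconstrained and only the multiplicative weights pass through abs(), the mixer can still represent rich state-dependent value functions while preserving the argmax consistency between the joint and per-agent Q values that QMIX relies on.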