IQLLoss
- class torchrl.objectives.IQLLoss(*args, **kwargs)[source]
TorchRL implementation of the IQL loss.
Presented in "Offline Reinforcement Learning with Implicit Q-Learning" https://arxiv.org/abs/2110.06169
- Parameters:
actor_network (ProbabilisticActor) – stochastic actor.
qvalue_network (TensorDictModule) – Q(s, a) parametric model. If a single instance of qvalue_network is provided, it will be duplicated num_qvalue_nets times. If a list of modules is passed, their parameters will be stacked unless they share the same identity (in which case the original parameters will be expanded); see the sketch after this list. Warning: when a list of parameters is passed, it will __not__ be compared against the policy parameters, and all parameters will be considered untied.
value_network (TensorDictModule, optional) – V(s) parametric model.
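As a hedged sketch of the two construction modes described above (module names follow the examples further down; qvalue_b stands for a hypothetical second Q module of the same class, so this is an illustration rather than asserted API behavior):
>>> # a single module is duplicated ``num_qvalue_nets`` times internally
>>> loss = IQLLoss(actor, qvalue, value, num_qvalue_nets=2)
>>> # a list of distinct modules has its parameters stacked instead
>>> loss = IQLLoss(actor, [qvalue, qvalue_b], value)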
- Keyword Arguments:
num_qvalue_nets (integer, optional) – number of Q-value networks used. Defaults to 2.
loss_function (str, optional) – loss function to be used with the value function. Defaults to "smooth_l1".
temperature (float, optional) – inverse temperature (beta). For smaller hyperparameter values, the objective behaves similarly to behavioral cloning, while for larger values it attempts to recover the maximum of the Q-function.
expectile (float, optional) – expectile \(\tau\). A larger value of \(\tau\) is crucial for antmaze tasks that require dynamic programming ("stitching"). (See the sketch after this list for how temperature and expectile enter the objectives.)
priority_key (str, optional) – [Deprecated, use .set_keys(priority_key=priority_key) instead] tensordict key where to write the priority (for prioritized replay buffer usage). Defaults to "td_error".
separate_losses (bool, optional) – if True, shared parameters between policy and critic will only be trained on the policy loss. Defaults to False, i.e. gradients are propagated to shared parameters for both policy and critic losses.
reduction (str, optional) – specifies the reduction to apply to the output: "none" | "mean" | "sum". "none": no reduction will be applied, "mean": the sum of the output will be divided by the number of elements in the output, "sum": the output will be summed. Default: "mean".
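To make the roles of these two hyperparameters concrete, here is a minimal, illustrative sketch of the standard IQL formulas they enter; the helper names are hypothetical and this is not TorchRL's internal code:
>>> import torch
>>> def expectile_loss(q, v, expectile=0.7):
...     # asymmetric L2 ("expectile regression"): residuals with Q > V are
...     # weighted by ``expectile``, the rest by ``1 - expectile``, so larger
...     # expectiles push V(s) toward the upper range of Q(s, a)
...     diff = q - v
...     weight = torch.abs(expectile - (diff < 0).float())
...     return (weight * diff.pow(2)).mean()
>>> def actor_weights(q, v, temperature=3.0):
...     # advantage weights exp(beta * (Q - V)) for the actor loss: a small
...     # beta approaches behavioral cloning, a large beta concentrates the
...     # weight on high-advantage actions (clamped here for stability)
...     return (temperature * (q - v)).exp().clamp(max=100.0)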
Examples
>>> import torch
>>> from torch import nn
>>> from torchrl.data import Bounded
>>> from torchrl.modules.distributions import NormalParamExtractor, TanhNormal
>>> from torchrl.modules.tensordict_module.actors import ProbabilisticActor, ValueOperator
>>> from torchrl.modules.tensordict_module.common import SafeModule
>>> from torchrl.objectives.iql import IQLLoss
>>> from tensordict import TensorDict
>>> n_act, n_obs = 4, 3
>>> spec = Bounded(-torch.ones(n_act), torch.ones(n_act), (n_act,))
>>> net = nn.Sequential(nn.Linear(n_obs, 2 * n_act), NormalParamExtractor())
>>> module = SafeModule(net, in_keys=["observation"], out_keys=["loc", "scale"])
>>> actor = ProbabilisticActor(
...     module=module,
...     in_keys=["loc", "scale"],
...     spec=spec,
...     distribution_class=TanhNormal)
>>> class QValueClass(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.linear = nn.Linear(n_obs + n_act, 1)
...     def forward(self, obs, act):
...         return self.linear(torch.cat([obs, act], -1))
>>> qvalue = SafeModule(
...     QValueClass(),
...     in_keys=["observation", "action"],
...     out_keys=["state_action_value"],
... )
>>> value = SafeModule(
...     nn.Linear(n_obs, 1),
...     in_keys=["observation"],
...     out_keys=["state_value"],
... )
>>> loss = IQLLoss(actor, qvalue, value)
>>> batch = [2, ]
>>> action = spec.rand(batch)
>>> data = TensorDict({
...     "observation": torch.randn(*batch, n_obs),
...     "action": action,
...     ("next", "done"): torch.zeros(*batch, 1, dtype=torch.bool),
...     ("next", "terminated"): torch.zeros(*batch, 1, dtype=torch.bool),
...     ("next", "reward"): torch.randn(*batch, 1),
...     ("next", "observation"): torch.randn(*batch, n_obs),
... }, batch)
>>> loss(data)
TensorDict(
    fields={
        entropy: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.float32, is_shared=False),
        loss_actor: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.float32, is_shared=False),
        loss_qvalue: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.float32, is_shared=False),
        loss_value: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.float32, is_shared=False)},
    batch_size=torch.Size([]),
    device=None,
    is_shared=False)
This class is also compatible with non-tensordict based modules and can be used without recurring to any tensordict-related primitives. In this case, the expected keyword arguments are: ["action", "next_reward", "next_done", "next_terminated"] + the in_keys of the actor, value, and qvalue networks. The return value is a tuple of tensors in the following order: ["loss_actor", "loss_qvalue", "loss_value", "entropy"].
Examples
>>> import torch
>>> from torch import nn
>>> from torchrl.data import Bounded
>>> from torchrl.modules.distributions import NormalParamExtractor, TanhNormal
>>> from torchrl.modules.tensordict_module.actors import ProbabilisticActor, ValueOperator
>>> from torchrl.modules.tensordict_module.common import SafeModule
>>> from torchrl.objectives.iql import IQLLoss
>>> _ = torch.manual_seed(42)
>>> n_act, n_obs = 4, 3
>>> spec = Bounded(-torch.ones(n_act), torch.ones(n_act), (n_act,))
>>> net = nn.Sequential(nn.Linear(n_obs, 2 * n_act), NormalParamExtractor())
>>> module = SafeModule(net, in_keys=["observation"], out_keys=["loc", "scale"])
>>> actor = ProbabilisticActor(
...     module=module,
...     in_keys=["loc", "scale"],
...     spec=spec,
...     distribution_class=TanhNormal)
>>> class QValueClass(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.linear = nn.Linear(n_obs + n_act, 1)
...     def forward(self, obs, act):
...         return self.linear(torch.cat([obs, act], -1))
>>> qvalue = SafeModule(
...     QValueClass(),
...     in_keys=["observation", "action"],
...     out_keys=["state_action_value"],
... )
>>> value = SafeModule(
...     nn.Linear(n_obs, 1),
...     in_keys=["observation"],
...     out_keys=["state_value"],
... )
>>> loss = IQLLoss(actor, qvalue, value)
>>> batch = [2, ]
>>> action = spec.rand(batch)
>>> loss_actor, loss_qvalue, loss_value, entropy = loss(
...     observation=torch.randn(*batch, n_obs),
...     action=action,
...     next_done=torch.zeros(*batch, 1, dtype=torch.bool),
...     next_terminated=torch.zeros(*batch, 1, dtype=torch.bool),
...     next_observation=torch.zeros(*batch, n_obs),
...     next_reward=torch.randn(*batch, 1))
>>> loss_actor.backward()
The output keys can also be filtered using the IQLLoss.select_out_keys() method.
Examples
>>> _ = loss.select_out_keys('loss_actor', 'loss_qvalue')
>>> loss_actor, loss_qvalue = loss(
...     observation=torch.randn(*batch, n_obs),
...     action=action,
...     next_done=torch.zeros(*batch, 1, dtype=torch.bool),
...     next_terminated=torch.zeros(*batch, 1, dtype=torch.bool),
...     next_observation=torch.zeros(*batch, n_obs),
...     next_reward=torch.randn(*batch, 1))
>>> loss_actor.backward()
- forward(tensordict: TensorDictBase = None) → TensorDictBase [source]
It is designed to read an input TensorDict and return another tensordict with loss keys named "loss*".
Splitting the loss into its components can then be used by the trainer to log the various loss values throughout training. Other scalars present in the output tensordict will be logged too.
- Parameters:
tensordict – an input tensordict with the values required to compute the loss.
- Returns:
A new tensordict with no batch dimension, containing various loss scalars which will be named "loss*". It is essential that the losses are returned under this name, as the trainer reads them before backpropagation.
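For illustration, a hypothetical training step consuming this output could sum every "loss_*" entry before backpropagating; the Adam optimizer choice and the loss/data objects from the first example above are assumptions of this sketch:
>>> optim = torch.optim.Adam(loss.parameters(), lr=3e-4)  # illustrative optimizer
>>> td_out = loss(data)  # ``loss`` and ``data`` as in the first example above
>>> # aggregate all entries that follow the "loss_*" naming convention
>>> total = sum(v for k, v in td_out.items() if k.startswith("loss_"))
>>> total.backward()
>>> optim.step()
>>> optim.zero_grad()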
- make_value_estimator(value_type: Optional[ValueEstimators] = None, **hyperparams)[source]
Value-function constructor.
If a non-default value function is wanted, it must be built using this method.
- Parameters:
value_type (ValueEstimators) – a ValueEstimators enum type indicating the value function to use. If none is provided, the default stored in the default_value_estimator attribute will be used. The resulting value estimator class will be registered in self.value_type, allowing future refinements.
**hyperparams – hyperparameters to use for the value function. If not provided, the values indicated by default_value_kwargs() will be used.
Examples
>>> from torchrl.objectives import DQNLoss
>>> # initialize the DQN loss
>>> actor = torch.nn.Linear(3, 4)
>>> dqn_loss = DQNLoss(actor, action_space="one-hot")
>>> # updating the parameters of the default value estimator
>>> dqn_loss.make_value_estimator(gamma=0.9)
>>> dqn_loss.make_value_estimator(
...     ValueEstimators.TD1,
...     gamma=0.9)
>>> # if we want to change the gamma value
>>> dqn_loss.make_value_estimator(dqn_loss.value_type, gamma=0.9)
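The same pattern applies to IQLLoss itself; a brief sketch, where the TD0 choice and gamma value are arbitrary illustrations and loss is assumed to be the IQLLoss instance built earlier:
>>> from torchrl.objectives import ValueEstimators
>>> # swap in a TD(0) estimator with a custom discount factor
>>> loss.make_value_estimator(ValueEstimators.TD0, gamma=0.99)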