DMControlEnv
- torchrl.envs.DMControlEnv(*args, **kwargs)[source]
DeepMind Control lab environment wrapper.
The DeepMind control library can be found here: https://github.com/deepmind/dm_control.
Paper: https://arxiv.org/abs/2006.12983
- Parameters:
env_name (str) – the name of the environment.
task_name (str) – the name of the task.
- Keyword Arguments:
from_pixels (bool, optional) – if True, an attempt is made to return pixel observations from the environment. By default, these observations are written under the "pixels" entry. Defaults to False.
pixels_only (bool, optional) – if True, only pixel observations are returned (by default, under the "pixels" entry in the output tensordict). If False, observations (e.g., states) are returned alongside the pixels whenever from_pixels=True (see the sketch after this list). Defaults to True.
frame_skip (int, optional) – if provided, indicates for how many steps the same action is to be repeated. The returned observation is the last observation of the sequence, while the reward is the sum of the rewards across those steps.
device (torch.device, optional) – if provided, the device to which the data is to be cast. Defaults to torch.device("cpu").
batch_size (torch.Size, optional) – the batch size of the environment. Should match the leading dimensions of all observations, done states, rewards, actions and infos. Defaults to torch.Size([]).
allow_done_after_reset (bool, optional) – if True, the environment is allowed to be done immediately after reset() is called. Defaults to False.
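How from_pixels, pixels_only and frame_skip combine is easiest to see on a reset tensordict. A minimal sketch (assuming dm_control is installed and the cheetah/run task is available; the printed keys follow the documented defaults above):

>>> from torchrl.envs import DMControlEnv
>>> env = DMControlEnv(
...     env_name="cheetah",
...     task_name="run",
...     from_pixels=True,   # write rendered frames under the "pixels" entry
...     pixels_only=False,  # keep the state observations alongside the pixels
...     frame_skip=2,       # repeat each action for 2 simulation steps
... )
>>> td = env.reset()
>>> "pixels" in td.keys()
True
>>> "position" in td.keys()
True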
- Variables:
available_envs (list) – a list of Tuple[str, List[str]] pairs representing the available environment/task combinations (iterated in the sketch below).
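As a sketch of how these pairs can be consumed, each entry unpacks into an environment name and its list of task names (the pendulum/swingup instantiation below exists only to obtain an instance to read the attribute from):

>>> env = DMControlEnv(env_name="pendulum", task_name="swingup")
>>> for name, tasks in env.available_envs:
...     for task in tasks:
...         print(f"{name}/{task}")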
Examples
>>> from torchrl.envs import DMControlEnv
>>> env = DMControlEnv(env_name="cheetah", task_name="run",
...     from_pixels=True, frame_skip=4)
>>> td = env.rand_step()
>>> print(td)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([6]), device=cpu, dtype=torch.float64, is_shared=False),
        next: TensorDict(
            fields={
                done: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                pixels: Tensor(shape=torch.Size([240, 320, 3]), device=cpu, dtype=torch.uint8, is_shared=False),
                position: Tensor(shape=torch.Size([8]), device=cpu, dtype=torch.float64, is_shared=False),
                reward: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.float64, is_shared=False),
                terminated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                truncated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                velocity: Tensor(shape=torch.Size([9]), device=cpu, dtype=torch.float64, is_shared=False)},
            batch_size=torch.Size([]),
            device=cpu,
            is_shared=False)},
    batch_size=torch.Size([]),
    device=cpu,
    is_shared=False)
>>> print(env.available_envs)
[('acrobot', ['swingup', 'swingup_sparse']), ('ball_in_cup', ['catch']), ('cartpole', ['balance', 'balance_sparse', 'swingup', 'swingup_sparse', 'three_poles', 'two_poles']), ('cheetah', ['run']), ('finger', ['spin', 'turn_easy', 'turn_hard']), ('fish', ['upright', 'swim']), ('hopper', ['stand', 'hop']), ('humanoid', ['stand', 'walk', 'run', 'run_pure_state']), ('manipulator', ['bring_ball', 'bring_peg', 'insert_ball', 'insert_peg']), ('pendulum', ['swingup']), ('point_mass', ['easy', 'hard']), ('reacher', ['easy', 'hard']), ('swimmer', ['swimmer6', 'swimmer15']), ('walker', ['stand', 'walk', 'run']), ('dog', ['fetch', 'run', 'stand', 'trot', 'walk']), ('humanoid_CMU', ['run', 'stand', 'walk']), ('lqr', ['lqr_2_1', 'lqr_6_2']), ('quadruped', ['escape', 'fetch', 'run', 'walk']), ('stacker', ['stack_2', 'stack_4'])]
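As a follow-up, a short trajectory can be collected with rollout(), which the wrapper inherits from torchrl's generic EnvBase interface rather than anything DMControl-specific; a sketch:

>>> env = DMControlEnv(env_name="cheetah", task_name="run", frame_skip=4)
>>> rollout = env.rollout(max_steps=10)
>>> print(rollout.batch_size)  # up to torch.Size([10]), one entry per step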