JumanjiWrapper
- torchrl.envs.JumanjiWrapper(*args, **kwargs)[source]
Jumanji environment wrapper.
Jumanji offers a vectorized simulation framework based on Jax. TorchRL's wrapper incurs some overhead for the jax-to-torch conversion, but computational graphs can still be built on top of the simulated trajectories, allowing for backpropagation through the rollout.
GitHub: https://github.com/instadeepai/jumanji
Docs: https://instadeepai.github.io/jumanji/
Paper: https://arxiv.org/abs/2306.09884
- Parameters:
env (jumanji.env.Environment) – the environment to wrap.
categorical_action_encoding (bool, optional) – if True, categorical specs will be converted to the TorchRL equivalent (torchrl.data.DiscreteTensorSpec), otherwise a one-hot encoding will be used (torchrl.data.OneHotTensorSpec). Defaults to False. See the sketch after this field list.
- Keyword Arguments:
from_pixels (bool, optional) – whether the environment should render its output. This will drastically impact the environment throughput. Only the first environment will be rendered. See render() for more information. Defaults to False.
frame_skip (int, optional) – if provided, indicates for how many steps the same action is to be repeated. The observation returned will be the last observation of the sequence, whereas the reward will be the sum of rewards across steps.
device (torch.device, optional) – if provided, the device on which the data is to be cast. Defaults to torch.device("cpu").
batch_size (torch.Size, optional) – the batch size of the environment. With jumanji, this indicates the number of vectorized environments. Defaults to torch.Size([]).
allow_done_after_reset (bool, optional) – if True, it is tolerated for envs to be done just after reset() is called. Defaults to False.
- Variables:
available_envs – the environments available to build.
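The effect of categorical_action_encoding can be checked directly on the action spec. The snippet below is a minimal sketch, assuming jumanji and its Snake-v1 environment are installed; the exact spec class names printed may differ across TorchRL versions.
>>> import jumanji
>>> from torchrl.envs import JumanjiWrapper
>>> # Default: one-hot action encoding.
>>> env_onehot = JumanjiWrapper(jumanji.make("Snake-v1"))
>>> print(type(env_onehot.action_spec).__name__)  # expected: a one-hot discrete spec
>>> # Categorical (integer) action encoding.
>>> env_cat = JumanjiWrapper(jumanji.make("Snake-v1"), categorical_action_encoding=True)
>>> print(type(env_cat.action_spec).__name__)  # expected: a categorical discrete spec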
Examples:
>>> import jumanji
>>> from torchrl.envs import JumanjiWrapper
>>> base_env = jumanji.make("Snake-v1")
>>> env = JumanjiWrapper(base_env)
>>> env.set_seed(0)
>>> td = env.reset()
>>> td["action"] = env.action_spec.rand()
>>> td = env.step(td)
>>> print(td)
TensorDict(
    fields={
        action: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
        action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
        done: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
        grid: Tensor(shape=torch.Size([12, 12, 5]), device=cpu, dtype=torch.float32, is_shared=False),
        next: TensorDict(
            fields={
                action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
                done: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False),
                grid: Tensor(shape=torch.Size([12, 12, 5]), device=cpu, dtype=torch.float32, is_shared=False),
                reward: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.float32, is_shared=False),
                state: TensorDict(
                    fields={
                        action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
                        body: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False),
                        body_state: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.int32, is_shared=False),
                        fruit_position: TensorDict(
                            fields={
                                col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                                row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                            batch_size=torch.Size([]),
                            device=cpu,
                            is_shared=False),
                        head_position: TensorDict(
                            fields={
                                col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                                row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                            batch_size=torch.Size([]),
                            device=cpu,
                            is_shared=False),
                        key: Tensor(shape=torch.Size([2]), device=cpu, dtype=torch.int32, is_shared=False),
                        length: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        tail: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False)},
                    batch_size=torch.Size([]),
                    device=cpu,
                    is_shared=False),
                step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                terminated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([]),
            device=cpu,
            is_shared=False),
        state: TensorDict(
            fields={
                action_mask: Tensor(shape=torch.Size([4]), device=cpu, dtype=torch.bool, is_shared=False),
                body: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False),
                body_state: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.int32, is_shared=False),
                fruit_position: TensorDict(
                    fields={
                        col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                    batch_size=torch.Size([]),
                    device=cpu,
                    is_shared=False),
                head_position: TensorDict(
                    fields={
                        col: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                        row: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False)},
                    batch_size=torch.Size([]),
                    device=cpu,
                    is_shared=False),
                key: Tensor(shape=torch.Size([2]), device=cpu, dtype=torch.int32, is_shared=False),
                length: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
                tail: Tensor(shape=torch.Size([12, 12]), device=cpu, dtype=torch.bool, is_shared=False)},
            batch_size=torch.Size([]),
            device=cpu,
            is_shared=False),
        step_count: Tensor(shape=torch.Size([]), device=cpu, dtype=torch.int32, is_shared=False),
        terminated: Tensor(shape=torch.Size([1]), device=cpu, dtype=torch.bool, is_shared=False)},
    batch_size=torch.Size([]),
    device=cpu,
    is_shared=False)
>>> print(env.available_envs)
['Game2048-v1', 'Maze-v0', 'Cleaner-v0', 'CVRP-v1', 'MultiCVRP-v0', 'Minesweeper-v0', 'RubiksCube-v0', 'Knapsack-v1', 'Sudoku-v0', 'Snake-v1', 'TSP-v1', 'Connector-v2', 'MMST-v0', 'GraphColoring-v0', 'RubiksCube-partly-scrambled-v0', 'RobotWarehouse-v0', 'Tetris-v0', 'BinPack-v2', 'Sudoku-very-easy-v0', 'JobShop-v0']
To take advantage of Jumanji, one will typically execute multiple environments at the same time.
>>> import jumanji
>>> from torchrl.envs import JumanjiWrapper
>>> base_env = jumanji.make("Snake-v1")
>>> env = JumanjiWrapper(base_env, batch_size=[10])
>>> env.set_seed(0)
>>> td = env.reset()
>>> td["action"] = env.action_spec.rand()
>>> td = env.step(td)
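As a quick sanity check, the batched environment can also be rolled out and the leading dimensions of the resulting TensorDict inspected. This is a minimal sketch; the shape layout assumed here ([num_envs, num_steps]) should be verified against your TorchRL version.
>>> td = env.rollout(20)
>>> # With batch_size=[10], the rollout is expected to stack the 10 vectorized
>>> # environments along the leading dimension and the 20 steps along the next one.
>>> print(td.batch_size)  # expected: torch.Size([10, 20])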
In the following example, we iteratively test different batch sizes and report the execution time for a short rollout:
Examples:
>>> from torch.utils.benchmark import Timer
>>> for batch_size in [4, 16, 128]:
...     timer = Timer(
...     '''
...     env.rollout(100)
...     ''',
...     setup=f'''
...     from torchrl.envs import JumanjiWrapper
...     import jumanji
...     env = JumanjiWrapper(jumanji.make('Snake-v1'), batch_size=[{batch_size}])
...     env.set_seed(0)
...     env.rollout(2)
...     ''')
...     print(batch_size, timer.timeit(number=10))
4
env.rollout(100)
setup: [...]
  Median: 122.40 ms
  2 measurements, 1 runs per measurement, 1 thread
16
env.rollout(100)
setup: [...]
  Median: 134.39 ms
  2 measurements, 1 runs per measurement, 1 thread
128
env.rollout(100)
setup: [...]
  Median: 172.31 ms
  2 measurements, 1 runs per measurement, 1 thread