@@ -218,7 +218,7 @@ The ``"_reset"`` key has two distinct functionalities:
modification will be lost. After this masking operation, the ``"_reset"``
entries will be erased from the :meth:`~.EnvBase.reset` outputs.

- It must be pointed that ``"_reset"`` is a private key, and it should only be
+ It must be pointed out that ``"_reset"`` is a private key, and it should only be
used when coding specific environment features that are internal facing.
In other words, this should NOT be used outside of the library, and developers
will keep the right to modify the logic of partial resets through ``"_reset"``
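As a quick illustration of the masking behaviour described above, here is a hedged sketch; it assumes ``env`` is a batched TorchRL environment with ``batch_size=[2]`` that supports partial resets, and that the ``"_reset"`` mask follows the shape of the ``done`` entry:

    >>> import torch
    >>> td = env.reset()
    >>> td["_reset"] = torch.tensor([[True], [False]])  # ask to reset only the first sub-env
    >>> out = env.reset(td)
    >>> assert "_reset" not in out.keys()  # the private entry is erased from the reset output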
@@ -243,7 +243,7 @@ designing reset functionalities:
``any`` or ``all`` logic depending on the task).
- When calling :meth:`env.reset(tensordict)` with a partial ``"_reset"`` entry
  that will reset some but not all the done sub-environments, the input data
- should contain the data of the sub-environemtns that are __not__ being reset.
+ should contain the data of the sub-environments that are __not__ being reset.
  The reason for this constraint lies in the fact that the output of the
  ``env._reset(data)`` can only be predicted for the entries that are reset.
  For the others, TorchRL cannot know in advance if they will be meaningful or
@@ -267,7 +267,7 @@ have on an environment returning zeros after reset:
>>> env.reset(data)
>>> print(data.get(("agent0", "val"))) # only the second value is 0
tensor([1, 0])
- >>> print(data.get(("agent1", "val"))) # only the second value is 0
+ >>> print(data.get(("agent1", "val"))) # only the first value is 0
tensor([0, 2])
>>> # nested resets are overridden by a "_reset" at the root
>>> data = TensorDict({
@@ -573,7 +573,7 @@ Dynamic Specs
.. _dynamic_envs:

Running environments in parallel is usually done via the creation of memory buffers used to pass information from one
- process to another. In some cases, it may be impossible to forecast whether and environment will or will not have
+ process to another. In some cases, it may be impossible to forecast whether an environment will or will not have
consistent inputs or outputs during a rollout, as their shape may be variable. We refer to this as dynamic specs.

TorchRL is capable of handling dynamic specs, but the batched environments and collectors will need to be made
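To make this concrete, here is a minimal sketch of a dynamic spec: the ``-1`` marks the dimension whose size is not known ahead of time. The class names ``Composite`` and ``Unbounded`` are the spec aliases exposed by recent TorchRL releases (older versions spell them ``CompositeSpec`` and ``UnboundedContinuousTensorSpec``):

    >>> import torch
    >>> from torchrl.data import Composite, Unbounded
    >>> observation_spec = Composite(
    ...     observation=Unbounded(shape=(-1, 3), dtype=torch.float32),
    ... )
    >>> observation_spec["observation"].shape  # the -1 dimension is dynamic
    torch.Size([-1, 3])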
@@ -670,9 +670,12 @@ Here is a working example:
is_shared=False,
stack_dim=0)

- .. warning:: The absence of memory buffers in :class:`~torchrl.envs.ParallelEnv` and in data collectors can impact
- performance of these classes dramatically. Any such usage should be carefully benchmarked against a plain execution on
- a single process, as serializing and deserializing large numbers of tensors can be very expensive.
+ .. warning::
+     The absence of memory buffers in :class:`~torchrl.envs.ParallelEnv` and in
+     data collectors can impact performance of these classes dramatically. Any
+     such usage should be carefully benchmarked against a plain execution on a
+     single process, as serializing and deserializing large numbers of tensors
+     can be very expensive.

Currently, :func:`~torchrl.envs.utils.check_env_specs` will pass for dynamic specs where a shape varies along some
dimensions, but not when a key is present during a step and absent during others, or when the number of dimensions
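A rough benchmarking sketch along those lines is shown below; the environment name, worker count and rollout length are placeholders, so substitute the constructor that actually triggers the buffer-less path in your setup:

    >>> import time
    >>> from torchrl.envs import GymEnv, ParallelEnv, SerialEnv
    >>> for cls in (SerialEnv, ParallelEnv):
    ...     env = cls(4, lambda: GymEnv("Pendulum-v1"))
    ...     _ = env.rollout(10)  # warm-up; spawns the workers for ParallelEnv
    ...     t0 = time.perf_counter()
    ...     _ = env.rollout(200, break_when_any_done=False)
    ...     print(cls.__name__, time.perf_counter() - t0)
    ...     env.close()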
@@ -941,7 +944,7 @@ formatted images (WHC or CWH).
>>> env.transform.dump() # Save the video and clear cache

Note that the cache of the transform will keep on growing until dump is called. It is the user's responsibility to
- take care of calling dumpy when needed to avoid OOM issues.
+ take care of calling ``dump`` when needed to avoid OOM issues.

In some cases, creating a testing environment where images can be collected is tedious or expensive, or simply impossible
(some libraries only allow one environment instance per workspace).
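For reference, a hedged end-to-end sketch of the recording workflow; the logger paths, environment name, tag and rollout length are placeholders, and writing ``mp4`` files requires torchvision:

    >>> from torchrl.envs import GymEnv, TransformedEnv
    >>> from torchrl.record import VideoRecorder
    >>> from torchrl.record.loggers import CSVLogger
    >>> logger = CSVLogger(exp_name="demo", log_dir="./videos", video_format="mp4")
    >>> env = TransformedEnv(
    ...     GymEnv("CartPole-v1", from_pixels=True, pixels_only=False),
    ...     VideoRecorder(logger=logger, tag="rollout"),
    ... )
    >>> _ = env.rollout(100)
    >>> env.transform.dump()  # flush the cached frames to disk and free the memory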