
Commit f672c70

[Doc] Fix typos (#2682)
Co-authored-by: Vincent Moens <vincentmoens@gmail.com>
1 parent d009835 commit f672c70


6 files changed, +20 -18 lines changed


docs/source/reference/envs.rst

Lines changed: 11 additions & 8 deletions
@@ -218,7 +218,7 @@ The ``"_reset"`` key has two distinct functionalities:
 modification will be lost. After this masking operation, the ``"_reset"``
 entries will be erased from the :meth:`~.EnvBase.reset` outputs.

-It must be pointed that ``"_reset"`` is a private key, and it should only be
+It must be pointed out that ``"_reset"`` is a private key, and it should only be
 used when coding specific environment features that are internal facing.
 In other words, this should NOT be used outside of the library, and developers
 will keep the right to modify the logic of partial resets through ``"_reset"``
@@ -243,7 +243,7 @@ designing reset functionalities:
   ``any`` or ``all`` logic depending on the task).
 - When calling :meth:`env.reset(tensordict)` with a partial ``"_reset"`` entry
   that will reset some but not all the done sub-environments, the input data
-  should contain the data of the sub-environemtns that are __not__ being reset.
+  should contain the data of the sub-environments that are __not__ being reset.
   The reason for this constrain lies in the fact that the output of the
   ``env._reset(data)`` can only be predicted for the entries that are reset.
   For the others, TorchRL cannot know in advance if they will be meaningful or
@@ -267,7 +267,7 @@ have on an environment returning zeros after reset:
     >>> env.reset(data)
     >>> print(data.get(("agent0", "val")))  # only the second value is 0
     tensor([1, 0])
-    >>> print(data.get(("agent1", "val")))  # only the second value is 0
+    >>> print(data.get(("agent1", "val")))  # only the first value is 0
     tensor([0, 2])
     >>> # nested resets are overridden by a "_reset" at the root
     >>> data = TensorDict({
@@ -573,7 +573,7 @@ Dynamic Specs
 .. _dynamic_envs:

 Running environments in parallel is usually done via the creation of memory buffers used to pass information from one
-process to another. In some cases, it may be impossible to forecast whether and environment will or will not have
+process to another. In some cases, it may be impossible to forecast whether an environment will or will not have
 consistent inputs or outputs during a rollout, as their shape may be variable. We refer to this as dynamic specs.

 TorchRL is capable of handling dynamic specs, but the batched environments and collectors will need to be made
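To give a concrete picture of what a dynamic spec looks like, here is a minimal sketch (it assumes the spec classes currently exported by torchrl.data; the shape is made up for illustration). A -1 in the spec shape marks a dimension whose size is allowed to change from one step to the next:

    import torch
    from torchrl.data import Unbounded

    # Observation whose leading dimension may vary between steps.
    obs_spec = Unbounded(shape=(-1, 3), dtype=torch.float32)
    print(obs_spec.shape)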
@@ -670,9 +670,12 @@ Here is a working example:
         is_shared=False,
         stack_dim=0)

-.. warning:: The absence of memory buffers in :class:`~torchrl.envs.ParallelEnv` and in data collectors can impact
-    performance of these classes dramatically. Any such usage should be carefully benchmarked against a plain execution on
-    a single process, as serializing and deserializing large numbers of tensors can be very expensive.
+.. warning::
+    The absence of memory buffers in :class:`~torchrl.envs.ParallelEnv` and in
+    data collectors can impact performance of these classes dramatically. Any
+    such usage should be carefully benchmarked against a plain execution on a
+    single process, as serializing and deserializing large numbers of tensors
+    can be very expensive.

 Currently, :func:`~torchrl.envs.utils.check_env_specs` will pass for dynamic specs where a shape varies along some
 dimensions, but not when a key is present during a step and absent during others, or when the number of dimensions
@@ -941,7 +944,7 @@ formatted images (WHC or CWH).
     >>> env.transform.dump() # Save the video and clear cache

 Note that the cache of the transform will keep on growing until dump is called. It is the user responsibility to
-take care of calling dumpy when needed to avoid OOM issues.
+take care of calling `dump` when needed to avoid OOM issues.

 In some cases, creating a testing environment where images can be collected is tedious or expensive, or simply impossible
 (some libraries only allow one environment instance per workspace).
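For reference, here is a minimal sketch of the recording pattern discussed above (it assumes gymnasium is installed, and torchvision for mp4 encoding; the logger name and tag are made up for illustration). Frames accumulate in the transform's cache during the rollout and are only written out when `dump` is called:

    from torchrl.envs import GymEnv, TransformedEnv
    from torchrl.record import CSVLogger, VideoRecorder

    logger = CSVLogger(exp_name="demo", log_dir="./videos", video_format="mp4")
    env = TransformedEnv(
        GymEnv("CartPole-v1", from_pixels=True, pixels_only=False),
        VideoRecorder(logger=logger, tag="rollout"),
    )
    env.rollout(50)       # frames pile up in the recorder's cache
    env.transform.dump()  # write the video and clear the cache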

torchrl/envs/transforms/transforms.py

Lines changed: 2 additions & 2 deletions
@@ -3533,7 +3533,7 @@ class DTypeCastTransform(Transform):
         >>> print(td.get("not_transformed").dtype)
         torch.float32

-    The same behavior is the rule when environments are constructedw without
+    The same behavior is the rule when environments are constructed without
     specifying the transform keys:

     Examples:
@@ -3903,7 +3903,7 @@ class DoubleToFloat(DTypeCastTransform):
         >>> print(td.get("not_transformed").dtype)
         torch.float32

-    The same behavior is the rule when environments are constructedw without
+    The same behavior is the rule when environments are constructed without
     specifying the transform keys:

     Examples:
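To illustrate the sentence fixed above, here is a minimal sketch of attaching the transform to an environment without specifying keys (it assumes dm_control is installed, since DeepMind Control environments emit float64 observations):

    from torchrl.envs import DMControlEnv, DoubleToFloat, TransformedEnv

    # No in_keys given: every float64 entry declared in the env specs is cast to float32.
    env = TransformedEnv(DMControlEnv("cheetah", "run"), DoubleToFloat())
    td = env.reset()
    print(td)  # entries the base env returned as float64 now show up as float32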

torchrl/modules/tensordict_module/probabilistic.py

Lines changed: 2 additions & 2 deletions
@@ -213,8 +213,8 @@ class SafeProbabilisticTensorDictSequential(
             instances, terminating in ProbabilisticTensorDictModule, to be run
             sequentially.
         partial_tolerant (bool, optional): if ``True``, the input tensordict can miss some
-            of the input keys. If so, the only module that will be executed are those
-            who can be executed given the keys that are present. Also, if the input
+            of the input keys. If so, the only modules that will be executed are those
+            which can be executed given the keys that are present. Also, if the input
             tensordict is a lazy stack of tensordicts AND if partial_tolerant is
             ``True`` AND if the stack does not have the required keys, then
             TensorDictSequential will scan through the sub-tensordicts looking for those

torchrl/modules/tensordict_module/sequence.py

Lines changed: 1 addition & 1 deletion
@@ -23,7 +23,7 @@ class SafeSequential(TensorDictSequential, SafeModule):
     Args:
         modules (iterable of TensorDictModules): ordered sequence of TensorDictModule instances to be run sequentially.
         partial_tolerant (bool, optional): if ``True``, the input tensordict can miss some of the input keys.
-            If so, the only module that will be executed are those who can be executed given the keys that
+            If so, the only modules that will be executed are those which can be executed given the keys that
             are present.
             Also, if the input tensordict is a lazy stack of tensordicts AND if partial_tolerant is ``True`` AND if the
             stack does not have the required keys, then SafeSequential will scan through the sub-tensordicts
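To make the partial_tolerant behavior described in these two docstrings concrete, here is a minimal sketch on a plain tensordict.nn.TensorDictSequential (the layer sizes and key names are made up for illustration). With partial_tolerant=True, a module whose input keys are missing is simply skipped:

    import torch
    from tensordict import TensorDict
    from tensordict.nn import TensorDictModule, TensorDictSequential
    from torch import nn

    mod_a = TensorDictModule(nn.Linear(3, 4), in_keys=["a"], out_keys=["hidden_a"])
    mod_b = TensorDictModule(nn.Linear(5, 4), in_keys=["b"], out_keys=["hidden_b"])
    seq = TensorDictSequential(mod_a, mod_b, partial_tolerant=True)

    td = TensorDict({"a": torch.randn(3)}, [])  # the "b" entry is missing
    seq(td)                   # mod_a runs; mod_b is skipped without raising
    print(sorted(td.keys()))  # contains "hidden_a" but not "hidden_b"

The lazy-stack case, where the stack itself misses a key but some of its sub-tensordicts carry it, is sketched under the multi_task tutorial further below.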

tutorials/sphinx-tutorials/getting-started-0.py

Lines changed: 1 addition & 1 deletion
@@ -106,7 +106,7 @@
 print(reset_with_action["action"])

 ################################
-# We now need to pass this action tp the environment.
+# We now need to pass this action to the environment.
 # We'll be passing the entire tensordict to the ``step`` method, since there
 # might be more than one tensor to be read in more advanced cases like
 # Multi-Agent RL or stateless environments:
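For context, a condensed sketch of the surrounding tutorial steps (it assumes gymnasium is installed; Pendulum-v1 here is just an example task):

    from torchrl.envs import GymEnv

    env = GymEnv("Pendulum-v1")
    reset_td = env.reset()
    reset_with_action = env.rand_action(reset_td)  # fills in a random "action" entry
    stepped_data = env.step(reset_with_action)     # the whole tensordict goes to step
    print(stepped_data["next", "observation"])     # step results are written under "next"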

tutorials/sphinx-tutorials/multi_task.py

Lines changed: 3 additions & 4 deletions
@@ -11,8 +11,6 @@
 # sphinx_gallery_start_ignore
 import warnings

-from tensordict import LazyStackedTensorDict
-
 warnings.filterwarnings("ignore")

 from torch import multiprocessing
@@ -32,6 +30,7 @@

 # sphinx_gallery_end_ignore

+from tensordict import LazyStackedTensorDict
 from tensordict.nn import TensorDictModule, TensorDictSequential
 from torch import nn

@@ -91,7 +90,7 @@
 # ^^^^^^
 #
 # We will design a policy where a backbone reads the "observation" key.
-# Then specific sub-components will ready the "observation_stand" and
+# Then specific sub-components will read the "observation_stand" and
 # "observation_walk" keys of the stacked tensordicts, if they are present,
 # and pass them through the dedicated sub-network.

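A self-contained sketch of that design (toy tensors and layer sizes stand in for real observations; only the "observation", "observation_stand" and "observation_walk" key names come from the tutorial). The backbone runs on the whole lazy stack, while each head only runs on the sub-tensordicts that carry its task-specific key, thanks to partial_tolerant=True:

    import torch
    from tensordict import LazyStackedTensorDict, TensorDict
    from tensordict.nn import TensorDictModule, TensorDictSequential
    from torch import nn

    class ConcatLinear(nn.Module):
        """Concatenate all inputs along the last dim, then apply a linear layer."""
        def __init__(self, d_in, d_out):
            super().__init__()
            self.linear = nn.Linear(d_in, d_out)

        def forward(self, *tensors):
            return self.linear(torch.cat(tensors, dim=-1))

    backbone = TensorDictModule(nn.Linear(3, 8), in_keys=["observation"], out_keys=["hidden"])
    head_stand = TensorDictModule(
        ConcatLinear(8 + 2, 1), in_keys=["hidden", "observation_stand"], out_keys=["action"]
    )
    head_walk = TensorDictModule(
        ConcatLinear(8 + 4, 1), in_keys=["hidden", "observation_walk"], out_keys=["action"]
    )
    policy = TensorDictSequential(backbone, head_stand, head_walk, partial_tolerant=True)

    td_stand = TensorDict({"observation": torch.randn(3), "observation_stand": torch.randn(2)}, [])
    td_walk = TensorDict({"observation": torch.randn(3), "observation_walk": torch.randn(4)}, [])
    td = LazyStackedTensorDict.lazy_stack([td_stand, td_walk], dim=0)
    policy(td)
    print(td["action"].shape)  # each head wrote its action into its own sub-tensordict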
@@ -138,7 +137,7 @@
 # Executing diverse tasks in parallel
 # ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 #
-# We can parallelize the operations if the common keys-value pairs share the
+# We can parallelize the operations if the common key-value pairs share the
 # same specs (in particular their shape and dtype must match: you can't do the
 # following if the observation shapes are different but are pointed to by the
 # same key).
