Commit 4842f5d

chore: remove redundant words (#10609)
Signed-off-by: sunxunle <sunxunle@ampere.tech>
1 parent 328e0d2 commit 4842f5d

File tree

5 files changed (+5, -5 lines)


docs/source/en/api/pipelines/mochi.md
Lines changed: 1 addition & 1 deletion

@@ -115,7 +115,7 @@ export_to_video(frames, "mochi.mp4", fps=30)
 
 ## Reproducing the results from the Genmo Mochi repo
 
-The [Genmo Mochi implementation](https://github.com/genmoai/mochi/tree/main) uses different precision values for each stage in the inference process. The text encoder and VAE use `torch.float32`, while the DiT uses `torch.bfloat16` with the [attention kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html#torch.nn.attention.sdpa_kernel) set to `EFFICIENT_ATTENTION`. Diffusers pipelines currently do not support setting different `dtypes` for different stages of the pipeline. In order to run inference in the same way as the the original implementation, please refer to the following example.
+The [Genmo Mochi implementation](https://github.com/genmoai/mochi/tree/main) uses different precision values for each stage in the inference process. The text encoder and VAE use `torch.float32`, while the DiT uses `torch.bfloat16` with the [attention kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html#torch.nn.attention.sdpa_kernel) set to `EFFICIENT_ATTENTION`. Diffusers pipelines currently do not support setting different `dtypes` for different stages of the pipeline. In order to run inference in the same way as the original implementation, please refer to the following example.
 
 <Tip>
 The original Mochi implementation zeros out empty prompts. However, enabling this option and placing the entire pipeline under autocast can lead to numerical overflows with the T5 text encoder.
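Since the doc line above notes that a single Diffusers pipeline cannot mix `dtypes`, the workaround is to run each stage at its own precision. A dependency-free sketch of that per-stage bookkeeping (the stage names follow the paragraph above; the dictionary and helper are illustrative — the real code casts the actual modules with `.to(torch.float32)` / `.to(torch.bfloat16)`):

```python
# Per-stage precision plan described in the Genmo Mochi reference implementation:
# text encoder and VAE in float32, the DiT transformer in bfloat16.
STAGE_DTYPES = {
    "text_encoder": "float32",
    "vae": "float32",
    "transformer": "bfloat16",  # DiT, run with the EFFICIENT_ATTENTION kernel
}

def cast_plan(components):
    # Pair every pipeline component with the precision it should run in;
    # unknown stages fall back to float32 as a safe default.
    return {name: STAGE_DTYPES.get(name, "float32") for name in components}

print(cast_plan(["text_encoder", "vae", "transformer"]))
```

This only records which precision each stage should use; applying the plan means casting each component before its stage of inference runs.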

scripts/convert_consistency_decoder.py
Lines changed: 1 addition & 1 deletion

@@ -73,7 +73,7 @@ def _download(url: str, root: str):
             loop.update(len(buffer))
 
     if insecure_hashlib.sha256(open(download_target, "rb").read()).hexdigest() != expected_sha256:
-        raise RuntimeError("Model has been downloaded but the SHA256 checksum does not not match")
+        raise RuntimeError("Model has been downloaded but the SHA256 checksum does not match")
 
     return download_target
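The corrected message guards a standard download-integrity check. A minimal stdlib sketch of the same verify-or-raise pattern (the function name and the temporary file standing in for the downloaded model are illustrative, not the script's actual code):

```python
import hashlib
import tempfile

def verify_sha256(path: str, expected_sha256: str) -> str:
    # Hash the downloaded file and compare against the published checksum.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != expected_sha256:
        raise RuntimeError("Model has been downloaded but the SHA256 checksum does not match")
    return path

# Demo: a temporary file stands in for the downloaded model weights.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"model weights")
    target = tmp.name

expected = hashlib.sha256(b"model weights").hexdigest()
print(verify_sha256(target, expected) == target)  # True
```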

src/diffusers/optimization.py
Lines changed: 1 addition & 1 deletion

@@ -258,7 +258,7 @@ def get_polynomial_decay_schedule_with_warmup(
 
     lr_init = optimizer.defaults["lr"]
     if not (lr_init > lr_end):
-        raise ValueError(f"lr_end ({lr_end}) must be be smaller than initial lr ({lr_init})")
+        raise ValueError(f"lr_end ({lr_end}) must be smaller than initial lr ({lr_init})")
 
     def lr_lambda(current_step: int):
         if current_step < num_warmup_steps:
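The fixed `ValueError` belongs to `get_polynomial_decay_schedule_with_warmup`. A pure-Python sketch of the schedule's shape, with the same guard (this returns the learning rate directly rather than the multiplicative factor the library's `lr_lambda` produces for `LambdaLR`, and the optimizer machinery is omitted):

```python
def polynomial_decay_with_warmup(current_step, *, lr_init, lr_end,
                                 num_warmup_steps, num_training_steps, power=1.0):
    # Guard mirrored from the fixed line: decay must go downward.
    if not (lr_init > lr_end):
        raise ValueError(f"lr_end ({lr_end}) must be smaller than initial lr ({lr_init})")
    if current_step < num_warmup_steps:
        # Linear warmup from 0 up to lr_init.
        return lr_init * current_step / max(1, num_warmup_steps)
    if current_step > num_training_steps:
        return lr_end
    # Polynomial interpolation from lr_init down to lr_end.
    lr_range = lr_init - lr_end
    decay_steps = num_training_steps - num_warmup_steps
    remaining = 1 - (current_step - num_warmup_steps) / decay_steps
    return lr_range * remaining**power + lr_end

for step in (0, 10, 60, 110):
    print(step, polynomial_decay_with_warmup(
        step, lr_init=1.0, lr_end=0.0, num_warmup_steps=10, num_training_steps=110))
```

With `power=1.0` this reduces to linear decay after warmup; larger powers front-load the decay.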

src/diffusers/pipelines/pag/pag_utils.py
Lines changed: 1 addition & 1 deletion

@@ -158,7 +158,7 @@ def set_pag_applied_layers(
         ),
     ):
         r"""
-        Set the the self-attention layers to apply PAG. Raise ValueError if the input is invalid.
+        Set the self-attention layers to apply PAG. Raise ValueError if the input is invalid.
 
         Args:
             pag_applied_layers (`str` or `List[str]`):
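The corrected docstring describes a validate-or-raise contract: user-supplied layer identifiers are matched against the model's self-attention modules, and invalid input raises `ValueError`. A simplified stand-alone sketch of that pattern (the layer names, error message, and regex matching here are illustrative, not the library's exact logic):

```python
import re

def match_pag_layers(pag_applied_layers, all_layer_names):
    # Accept a single pattern or a list of patterns, as the docstring allows.
    if isinstance(pag_applied_layers, str):
        pag_applied_layers = [pag_applied_layers]
    matched = []
    for pattern in pag_applied_layers:
        hits = [name for name in all_layer_names if re.search(pattern, name)]
        if not hits:
            # Invalid input: no self-attention layer matches this pattern.
            raise ValueError(f"Cannot find PAG layer matching: {pattern}")
        matched.extend(hits)
    return matched

layers = ["blocks.0.attn1", "blocks.1.attn1", "blocks.1.attn2"]
print(match_pag_layers("blocks.1", layers))  # ['blocks.1.attn1', 'blocks.1.attn2']
```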

src/diffusers/video_processor.py
Lines changed: 1 addition & 1 deletion

@@ -67,7 +67,7 @@ def preprocess_video(self, video, height: Optional[int] = None, width: Optional[
 
         # ensure the input is a list of videos:
         # - if it is a batch of videos (5d torch.Tensor or np.ndarray), it is converted to a list of videos (a list of 4d torch.Tensor or np.ndarray)
-        # - if it is is a single video, it is convereted to a list of one video.
+        # - if it is a single video, it is convereted to a list of one video.
         if isinstance(video, (np.ndarray, torch.Tensor)) and video.ndim == 5:
             video = list(video)
         elif isinstance(video, list) and is_valid_image(video[0]) or is_valid_image_imagelist(video):
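The corrected comment describes a normalization step: a batched 5-d array becomes a list of 4-d videos, and a single 4-d video becomes a one-element list. A dependency-free sketch using nested-list depth as a stand-in for `ndim` (the real code operates on `np.ndarray` / `torch.Tensor`; the helper names are hypothetical):

```python
def ndim(x):
    # Depth of uniform nesting, standing in for array.ndim.
    depth = 0
    while isinstance(x, list):
        depth += 1
        x = x[0]
    return depth

def ensure_video_list(video):
    if ndim(video) == 5:
        # Batch of videos -> list of 4-d videos.
        return list(video)
    if ndim(video) == 4:
        # Single video -> list of one video.
        return [video]
    raise ValueError("expected a 4-d video or a 5-d batch of videos")
```

Downstream code can then iterate over a uniform `List[video]` regardless of whether the caller passed one video or a batch.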
