
Commit 8183d0f

Fix typos in strings and comments (#11476)

Authored by co63oc, a-r-r-o-w, and github-actions[bot]

* Fix typos in strings and comments
* Update src/diffusers/hooks/hooks.py
* Update src/diffusers/hooks/hooks.py
* Update layerwise_casting.py
* Apply style fixes
* update

Signed-off-by: co63oc <co63oc@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>

1 parent 6508da6 · commit 8183d0f

24 files changed (+34, −34 lines)

examples/cogvideo/train_cogvideox_image_to_video_lora.py (1 addition, 1 deletion)

@@ -555,7 +555,7 @@ def _load_dataset_from_local_path(self):
 
         if any(not path.is_file() for path in instance_videos):
             raise ValueError(
-                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found atleast one path that is not a valid file."
+                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found at least one path that is not a valid file."
             )
 
         return instance_prompts, instance_videos
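The check whose error message is being corrected above can be sketched in isolation. This is a minimal stand-alone reproduction of the `any()`/`is_file()` validation pattern, with hypothetical file names (the real script reads the paths from the `--video_column` file):

```python
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as root:
    instance_data_root = Path(root)
    real = instance_data_root / "clip_0.mp4"
    real.write_bytes(b"")  # empty stand-in for an actual video file
    instance_videos = [real, instance_data_root / "missing.mp4"]

    # Same check as in the training script: every listed path must be a file.
    bad = any(not path.is_file() for path in instance_videos)

print(bad)  # True: the second listed path does not exist
```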

examples/cogvideo/train_cogvideox_lora.py (1 addition, 1 deletion)

@@ -539,7 +539,7 @@ def _load_dataset_from_local_path(self):
 
         if any(not path.is_file() for path in instance_videos):
             raise ValueError(
-                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found atleast one path that is not a valid file."
+                "Expected '--video_column' to be a path to a file in `--instance_data_root` containing line-separated paths to video data but found at least one path that is not a valid file."
             )
 
         return instance_prompts, instance_videos

examples/research_projects/multi_subject_dreambooth_inpainting/README.md (1 addition, 1 deletion)

@@ -73,7 +73,7 @@ accelerate launch train_multi_subject_dreambooth_inpaint.py \
 
 ## 3. Results
 
-A [![Weights & Biases](https://img.shields.io/badge/Weights%20&%20Biases-Report-blue)](https://wandb.ai/gzguevara/uncategorized/reports/Multi-Subject-Dreambooth-for-Inpainting--Vmlldzo2MzY5NDQ4?accessToken=y0nya2d7baguhbryxaikbfr1203amvn1jsmyl07vk122mrs7tnph037u1nqgse8t) is provided showing the training progress by every 50 steps. Note, the reported weights & baises run was performed on a A100 GPU with the following stetting:
+A [![Weights & Biases](https://img.shields.io/badge/Weights%20&%20Biases-Report-blue)](https://wandb.ai/gzguevara/uncategorized/reports/Multi-Subject-Dreambooth-for-Inpainting--Vmlldzo2MzY5NDQ4?accessToken=y0nya2d7baguhbryxaikbfr1203amvn1jsmyl07vk122mrs7tnph037u1nqgse8t) is provided showing the training progress by every 50 steps. Note, the reported weights & biases run was performed on a A100 GPU with the following stetting:
 
 ```bash
 accelerate launch train_multi_subject_dreambooth_inpaint.py \

src/diffusers/hooks/faster_cache.py (1 addition, 1 deletion)

@@ -146,7 +146,7 @@ class FasterCacheConfig:
     alpha_low_frequency: float = 1.1
     alpha_high_frequency: float = 1.1
 
-    # n as described in CFG-Cache explanation in the paper - dependant on the model
+    # n as described in CFG-Cache explanation in the paper - dependent on the model
     unconditional_batch_skip_range: int = 5
     unconditional_batch_timestep_skip_range: Tuple[int, int] = (-1, 641)
 
src/diffusers/hooks/hooks.py (1 addition, 1 deletion)

@@ -45,7 +45,7 @@ def initialize_hook(self, module: torch.nn.Module) -> torch.nn.Module:
 
     def deinitalize_hook(self, module: torch.nn.Module) -> torch.nn.Module:
         r"""
-        Hook that is executed when a model is deinitalized.
+        Hook that is executed when a model is deinitialized.
 
         Args:
             module (`torch.nn.Module`):

src/diffusers/hooks/layerwise_casting.py (1 addition, 1 deletion)

@@ -62,7 +62,7 @@ def initialize_hook(self, module: torch.nn.Module):
 
     def deinitalize_hook(self, module: torch.nn.Module):
         raise NotImplementedError(
-            "LayerwiseCastingHook does not support deinitalization. A model once enabled with layerwise casting will "
+            "LayerwiseCastingHook does not support deinitialization. A model once enabled with layerwise casting will "
            "have casted its weights to a lower precision dtype for storage. Casting this back to the original dtype "
            "will lead to precision loss, which might have an impact on the model's generation quality. The model should "
            "be re-initialized and loaded in the original dtype."
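The error message above explains why deinitialization is irreversible: storing weights in a lower-precision dtype rounds them, and casting back cannot recover the lost bits. A minimal stdlib-only illustration of that one-way loss, using Python's `struct` half-precision (`'e'`) format as a stand-in for a low-precision storage dtype:

```python
import struct

def to_fp16(x: float) -> float:
    # Round-trip a Python float through IEEE binary16 storage,
    # mimicking what casting a weight to a lower-precision dtype does.
    return struct.unpack("<e", struct.pack("<e", x))[0]

w = 0.1234567
stored = to_fp16(w)           # precision is lost at storage time
recovered = to_fp16(stored)   # casting again cannot restore the original

print(w == stored)            # False: the original value is gone
print(stored == recovered)    # True: further round-trips are lossless
```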

src/diffusers/loaders/peft.py (1 addition, 1 deletion)

@@ -251,7 +251,7 @@ def load_lora_adapter(
 
         rank = {}
         for key, val in state_dict.items():
-            # Cannot figure out rank from lora layers that don't have atleast 2 dimensions.
+            # Cannot figure out rank from lora layers that don't have at least 2 dimensions.
            # Bias layers in LoRA only have a single dimension
            if "lora_B" in key and val.ndim > 1:
                # Check out https://github.com/huggingface/peft/pull/2419 for the `^` symbol.
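The comment being corrected above guards rank inference from LoRA weight shapes. A minimal sketch of that logic with plain shape tuples standing in for tensors (the key names are hypothetical but follow the `lora_A`/`lora_B` naming convention): in a `lora_B` matrix of shape `(out_features, rank)`, the rank is the second dimension, while 1-D bias entries must be skipped.

```python
# Hypothetical state dict: values are shape tuples instead of tensors.
state_dict = {
    "unet.lora_A.weight": (4, 320),   # rank x in_features
    "unet.lora_B.weight": (320, 4),   # out_features x rank
    "unet.lora_B.bias": (320,),       # 1-D: rank cannot be read from this
}

rank = {}
for key, shape in state_dict.items():
    # Cannot figure out rank from LoRA layers that don't have at least
    # 2 dimensions; bias entries only have a single dimension.
    if "lora_B" in key and len(shape) > 1:
        rank[key] = shape[1]

print(rank)  # {'unet.lora_B.weight': 4}
```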

src/diffusers/models/autoencoders/autoencoder_kl.py (2 additions, 2 deletions)

@@ -63,8 +63,8 @@ class AutoencoderKL(ModelMixin, ConfigMixin, FromOriginalModelMixin, PeftAdapter
             Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
         force_upcast (`bool`, *optional*, default to `True`):
             If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without loosing too much precision in which case
-            `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
+            can be fine-tuned / trained to a lower range without losing too much precision in which case `force_upcast`
+            can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
         mid_block_add_attention (`bool`, *optional*, default to `True`):
             If enabled, the mid_block of the Encoder and Decoder will have attention blocks. If set to false, the
             mid_block will only have resnet blocks
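The `force_upcast` docstring above describes a dtype decision: run the VAE in float32 unless the checkpoint was trained to tolerate a half-precision range. A hedged, self-contained sketch of that decision (the helper and its string dtypes are hypothetical; the real pipelines key this off the VAE's config and tensor dtypes):

```python
def decode_dtype(model_dtype: str, force_upcast: bool) -> str:
    # When force_upcast is set, decode in float32 even if the model
    # weights are stored in a half-precision dtype.
    if force_upcast and model_dtype in ("float16", "bfloat16"):
        return "float32"
    return model_dtype

print(decode_dtype("float16", True))   # float32: default, safe path
print(decode_dtype("float16", False))  # float16: e.g. a fine-tuned fp16-safe VAE
```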

src/diffusers/models/autoencoders/autoencoder_kl_allegro.py (2 additions, 2 deletions)

@@ -715,8 +715,8 @@ class AutoencoderKLAllegro(ModelMixin, ConfigMixin):
             Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
         force_upcast (`bool`, default to `True`):
             If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without loosing too much precision in which case
-            `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
+            can be fine-tuned / trained to a lower range without losing too much precision in which case `force_upcast`
+            can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
     """
 
     _supports_gradient_checkpointing = True

src/diffusers/models/autoencoders/autoencoder_kl_cogvideox.py (2 additions, 2 deletions)

@@ -983,8 +983,8 @@ class AutoencoderKLCogVideoX(ModelMixin, ConfigMixin, FromOriginalModelMixin):
             Synthesis with Latent Diffusion Models](https://huggingface.co/papers/2112.10752) paper.
         force_upcast (`bool`, *optional*, default to `True`):
             If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
-            can be fine-tuned / trained to a lower range without loosing too much precision in which case
-            `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
+            can be fine-tuned / trained to a lower range without losing too much precision in which case `force_upcast`
+            can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
     """
 
     _supports_gradient_checkpointing = True

0 commit comments