
Commit 4179aa3

Merge branch 'main' into memory-optims
2 parents: 7594fe0 + 86294d3

121 files changed: +281 / -192 lines changed

docs/source/en/api/pipelines/animatediff.md
Lines changed: 1 addition & 1 deletion

@@ -966,7 +966,7 @@ pipe.to("cuda")
 prompt = {
     0: "A caterpillar on a leaf, high quality, photorealistic",
     40: "A caterpillar transforming into a cocoon, on a leaf, near flowers, photorealistic",
-    80: "A cocoon on a leaf, flowers in the backgrond, photorealistic",
+    80: "A cocoon on a leaf, flowers in the background, photorealistic",
     120: "A cocoon maturing and a butterfly being born, flowers and leaves visible in the background, photorealistic",
     160: "A beautiful butterfly, vibrant colors, sitting on a leaf, flowers in the background, photorealistic",
     200: "A beautiful butterfly, flying away in a forest, photorealistic",
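For context (not part of this commit): a frame-indexed prompt dict like the one above is typically passed straight to the pipeline call in the FreeNoise example this section belongs to. A minimal sketch, assuming `pipe` is an AnimateDiff pipeline with FreeNoise enabled; the negative prompt and sampling values are placeholders:

```python
# Sketch only: `pipe` and `prompt` come from the surrounding doc example; the
# values below are placeholders, not taken from this commit.
output = pipe(
    prompt=prompt,                      # dict mapping frame index -> prompt
    negative_prompt="bad quality, worst quality, jpeg artifacts",
    num_frames=256,                     # long video, interpolated across the keyed prompts
    guidance_scale=2.5,
    num_inference_steps=10,
)
frames = output.frames[0]
```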

docs/source/en/api/pipelines/ledits_pp.md
Lines changed: 1 addition & 1 deletion

@@ -29,7 +29,7 @@ You can find additional information about LEDITS++ on the [project page](https:/
 </Tip>

 <Tip warning={true}>
-Due to some backward compatability issues with the current diffusers implementation of [`~schedulers.DPMSolverMultistepScheduler`] this implementation of LEdits++ can no longer guarantee perfect inversion.
+Due to some backward compatibility issues with the current diffusers implementation of [`~schedulers.DPMSolverMultistepScheduler`] this implementation of LEdits++ can no longer guarantee perfect inversion.
 This issue is unlikely to have any noticeable effects on applied use-cases. However, we provide an alternative implementation that guarantees perfect inversion in a dedicated [GitHub repo](https://github.com/ml-research/ledits_pp).
 </Tip>
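For readers who hit the warning above, the invert-then-edit flow it refers to looks roughly like this. This is a sketch, not taken from the commit; the checkpoint, image URL, and parameter values are illustrative:

```python
import torch
from diffusers import LEditsPPPipelineStableDiffusion
from diffusers.utils import load_image

pipe = LEditsPPPipelineStableDiffusion.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = load_image("https://example.com/input.png")  # placeholder URL

# Inversion step -- this is the part the Tip says is only approximate
_ = pipe.invert(image=image, num_inversion_steps=50, skip=0.2)

# Editing step, guided by the inverted latents
edited = pipe(
    editing_prompt=["cherry blossom"],
    edit_guidance_scale=10.0,
    edit_threshold=0.75,
).images[0]
```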

docs/source/en/api/pipelines/wan.md
Lines changed: 2 additions & 2 deletions

@@ -285,7 +285,7 @@ pipe = WanImageToVideoPipeline.from_pretrained(
     image_encoder=image_encoder,
     torch_dtype=torch.bfloat16
 )
-# Since we've offloaded the larger models alrady, we can move the rest of the model components to GPU
+# Since we've offloaded the larger models already, we can move the rest of the model components to GPU
 pipe.to("cuda")

 image = load_image(
@@ -368,7 +368,7 @@ pipe = WanImageToVideoPipeline.from_pretrained(
     image_encoder=image_encoder,
     torch_dtype=torch.bfloat16
 )
-# Since we've offloaded the larger models alrady, we can move the rest of the model components to GPU
+# Since we've offloaded the larger models already, we can move the rest of the model components to GPU
 pipe.to("cuda")

 image = load_image(
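The comment corrected above refers to offloading done earlier in the Wan example. As a rough sketch of what that prior step can look like with diffusers' group-offloading hooks; the component names and argument values here are assumptions for illustration, not part of this commit:

```python
import torch
from diffusers.hooks import apply_group_offloading

onload_device = torch.device("cuda")
offload_device = torch.device("cpu")

# `text_encoder` and `transformer` stand for the large components loaded earlier
# in the doc's example; only the remaining, smaller components are then moved to
# the GPU with pipe.to("cuda").
apply_group_offloading(
    text_encoder,
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="block_level",
    num_blocks_per_group=4,
)
transformer.enable_group_offload(
    onload_device=onload_device,
    offload_device=offload_device,
    offload_type="leaf_level",
    use_stream=True,
)
```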

docs/source/en/using-diffusers/inference_with_lcm.md
Lines changed: 2 additions & 2 deletions

@@ -485,7 +485,7 @@ image = image[:, :, None]
 image = np.concatenate([image, image, image], axis=2)
 canny_image = Image.fromarray(image).resize((1024, 1216))

-adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda")
+adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")

 unet = UNet2DConditionModel.from_pretrained(
     "latent-consistency/lcm-sdxl",
@@ -551,7 +551,7 @@ image = image[:, :, None]
 image = np.concatenate([image, image, image], axis=2)
 canny_image = Image.fromarray(image).resize((1024, 1024))

-adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda")
+adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")

 pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
     "stabilityai/stable-diffusion-xl-base-1.0",

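Both snippets above then plug the adapter into an LCM pipeline. A sketch of the usual continuation from the LCM docs; the prompt, step count, and guidance value are placeholders, not from this commit:

```python
import torch
from diffusers import LCMScheduler, StableDiffusionXLAdapterPipeline, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "latent-consistency/lcm-sdxl", torch_dtype=torch.float16, variant="fp16"
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    unet=unet,
    adapter=adapter,          # the T2IAdapter loaded above
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="a photo of a modern house, golden hour",  # placeholder prompt
    image=canny_image,                                # the Canny map built above
    num_inference_steps=4,
    guidance_scale=1.5,
).images[0]
```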
docs/source/en/using-diffusers/pag.md
Lines changed: 3 additions & 3 deletions

@@ -154,11 +154,11 @@ pipeline = AutoPipelineForInpainting.from_pretrained(
 pipeline.enable_model_cpu_offload()
 ```

-You can enable PAG on an exisiting inpainting pipeline like this
+You can enable PAG on an existing inpainting pipeline like this

 ```py
-pipeline_inpaint = AutoPipelineForInpaiting.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
-pipeline = AutoPipelineForInpaiting.from_pipe(pipeline_inpaint, enable_pag=True)
+pipeline_inpaint = AutoPipelineForInpainting.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
+pipeline = AutoPipelineForInpainting.from_pipe(pipeline_inpaint, enable_pag=True)
 ```

 This still works when your pipeline has a different task:
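The sentence above presumably continues with a cross-task `from_pipe` example; something along these lines (a sketch, not part of this commit):

```python
import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForInpainting

# Reuse the components of a text-to-image pipeline to build a PAG inpainting pipeline
pipeline_t2i = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipeline = AutoPipelineForInpainting.from_pipe(pipeline_t2i, enable_pag=True)
```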

examples/advanced_diffusion_training/README.md
Lines changed: 2 additions & 2 deletions

@@ -125,7 +125,7 @@ Now we'll simply specify the name of the dataset and caption column (in this cas
 ```

 You can also load a dataset straight from by specifying it's name in `dataset_name`.
-Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loadin your own caption dataset.
+Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loading your own caption dataset.

 - **optimizer**: for this example, we'll use [prodigy](https://huggingface.co/blog/sdxl_lora_advanced_script#adaptive-optimizers) - an adaptive optimizer
 - **pivotal tuning**
@@ -404,7 +404,7 @@ The advanced script now supports custom choice of U-net blocks to train during D
 > In light of this, we're introducing a new feature to the advanced script to allow for configurable U-net learned blocks.

 **Usage**
-Configure LoRA learned U-net blocks adding a `lora_unet_blocks` flag, with a comma seperated string specifying the targeted blocks.
+Configure LoRA learned U-net blocks adding a `lora_unet_blocks` flag, with a comma separated string specifying the targeted blocks.
 e.g:
 ```bash
 --lora_unet_blocks="unet.up_blocks.0.attentions.0,unet.up_blocks.0.attentions.1"
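As a rough illustration of what the flag above implies (not the advanced script's actual implementation): the comma separated string can be split into block prefixes, and LoRA layers are then kept only for modules under those prefixes.

```python
# Illustrative only -- not the advanced script's real code.
blocks = "unet.up_blocks.0.attentions.0,unet.up_blocks.0.attentions.1".split(",")

def is_trained(module_name: str) -> bool:
    """Keep a LoRA layer only if it lives under one of the requested blocks."""
    return any(module_name.startswith(prefix.removeprefix("unet.")) for prefix in blocks)
```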

examples/advanced_diffusion_training/README_flux.md
Lines changed: 1 addition & 1 deletion

@@ -141,7 +141,7 @@ Now we'll simply specify the name of the dataset and caption column (in this cas
 ```

 You can also load a dataset straight from by specifying it's name in `dataset_name`.
-Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loadin your own caption dataset.
+Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loading your own caption dataset.

 - **optimizer**: for this example, we'll use [prodigy](https://huggingface.co/blog/sdxl_lora_advanced_script#adaptive-optimizers) - an adaptive optimizer
 - **pivotal tuning**

examples/amused/README.md
Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 ## Amused training

-Amused can be finetuned on simple datasets relatively cheaply and quickly. Using 8bit optimizers, lora, and gradient accumulation, amused can be finetuned with as little as 5.5 GB. Here are a set of examples for finetuning amused on some relatively simple datasets. These training recipies are aggressively oriented towards minimal resources and fast verification -- i.e. the batch sizes are quite low and the learning rates are quite high. For optimal quality, you will probably want to increase the batch sizes and decrease learning rates.
+Amused can be finetuned on simple datasets relatively cheaply and quickly. Using 8bit optimizers, lora, and gradient accumulation, amused can be finetuned with as little as 5.5 GB. Here are a set of examples for finetuning amused on some relatively simple datasets. These training recipes are aggressively oriented towards minimal resources and fast verification -- i.e. the batch sizes are quite low and the learning rates are quite high. For optimal quality, you will probably want to increase the batch sizes and decrease learning rates.

 All training examples use fp16 mixed precision and gradient checkpointing. We don't show 8 bit adam + lora as its about the same memory use as just using lora (bitsandbytes uses full precision optimizer states for weights below a minimum size).
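Not part of this commit, but for concreteness: the memory levers the paragraph mentions (8-bit optimizer states plus gradient accumulation) boil down to something like the following sketch, with the parameter source, loss function, and hyperparameters left as placeholders:

```python
import bitsandbytes as bnb

# 8-bit optimizer states for the (small set of) LoRA parameters being trained
optimizer = bnb.optim.AdamW8bit(lora_parameters, lr=1e-4)   # lora_parameters: placeholder

# Gradient accumulation: only step every N micro-batches to emulate a larger batch
accumulation_steps = 4
for step, batch in enumerate(dataloader):                   # dataloader: placeholder
    loss = compute_loss(batch) / accumulation_steps         # compute_loss: placeholder
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```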

examples/cogvideo/README.md
Lines changed: 1 addition & 1 deletion

@@ -201,7 +201,7 @@ Note that setting the `<ID_TOKEN>` is not necessary. From some limited experimen
 > - The original repository uses a `lora_alpha` of `1`. We found this not suitable in many runs, possibly due to difference in modeling backends and training settings. Our recommendation is to set to the `lora_alpha` to either `rank` or `rank // 2`.
 > - If you're training on data whose captions generate bad results with the original model, a `rank` of 64 and above is good and also the recommendation by the team behind CogVideoX. If the generations are already moderately good on your training captions, a `rank` of 16/32 should work. We found that setting the rank too low, say `4`, is not ideal and doesn't produce promising results.
 > - The authors of CogVideoX recommend 4000 training steps and 100 training videos overall to achieve the best result. While that might yield the best results, we found from our limited experimentation that 2000 steps and 25 videos could also be sufficient.
-> - When using the Prodigy opitimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From my very limited testing, I found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `prodigy_safeguard_warmup` and `--prodigy_decouple`.
+> - When using the Prodigy optimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From my very limited testing, I found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `prodigy_safeguard_warmup` and `--prodigy_decouple`.
 > - The recommended learning rate by the CogVideoX authors and from our experimentation with Adam/AdamW is between `1e-3` and `1e-4` for a dataset of 25+ videos.
 >
 > Note that our testing is not exhaustive due to limited time for exploration. Our recommendation would be to play around with the different knobs and dials to find the best settings for your data.
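For reference (not from this commit): those flags roughly correspond to the following Prodigy construction from the `prodigyopt` package, with the parameter list left as a placeholder:

```python
from prodigyopt import Prodigy

optimizer = Prodigy(
    lora_parameters,           # placeholder: the LoRA parameters being trained
    lr=0.5,                    # the learning rate suggested above
    use_bias_correction=True,  # --prodigy_use_bias_correction
    safeguard_warmup=True,     # --prodigy_safeguard_warmup
    decouple=True,             # --prodigy_decouple
)
```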

examples/cogvideo/train_cogvideox_image_to_video_lora.py
Lines changed: 1 addition & 1 deletion

@@ -879,7 +879,7 @@ def prepare_rotary_positional_embeddings(


 def get_optimizer(args, params_to_optimize, use_deepspeed: bool = False):
-    # Use DeepSpeed optimzer
+    # Use DeepSpeed optimizer
     if use_deepspeed:
         from accelerate.utils import DummyOptim
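For context, the branch touched above typically continues by returning accelerate's `DummyOptim` placeholder so that DeepSpeed can supply the real optimizer from its own config. A sketch under that assumption; the exact keyword arguments are illustrative, not taken from the commit:

```python
def get_optimizer(args, params_to_optimize, use_deepspeed: bool = False):
    # Use DeepSpeed optimizer
    if use_deepspeed:
        from accelerate.utils import DummyOptim

        # DeepSpeed builds the actual optimizer from its config file, so the
        # training loop only needs this stand-in object.
        return DummyOptim(
            params_to_optimize,
            lr=args.learning_rate,
            betas=(args.adam_beta1, args.adam_beta2),
            eps=args.adam_epsilon,
            weight_decay=args.adam_weight_decay,
        )
    # ...otherwise fall through to a regular optimizer (AdamW, Prodigy, etc.)
```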
