
Commit 290b88d

resolve conflicts.

2 parents: aaaa947 + edb8c1b

File tree: 241 files changed, +2742 −1157 lines


.github/workflows/nightly_tests.yml

Lines changed: 1 addition & 1 deletion
@@ -272,7 +272,7 @@ jobs:
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "not Flax and not Onnx" \
             --make-reports=tests_torch_minimum_version_cuda \
-            tests/models/test_modelling_common.py \
+            tests/models/test_modeling_common.py \
             tests/pipelines/test_pipelines_common.py \
             tests/pipelines/test_pipeline_utils.py \
             tests/pipelines/test_pipelines.py \

.github/workflows/pr_tests.yml

Lines changed: 1 addition & 1 deletion
@@ -266,7 +266,7 @@ jobs:
           # TODO (sayakpaul, DN6): revisit `--no-deps`
           python -m pip install -U peft@git+https://github.com/huggingface/peft.git --no-deps
           python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
-          python -m uv pip install -U tokenizers@git+https://github.com/huggingface/tokenizers.git --no-deps
+          python -m uv pip install -U tokenizers
           pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
 
       - name: Environment

.github/workflows/release_tests_fast.yml

Lines changed: 1 addition & 1 deletion
@@ -193,7 +193,7 @@ jobs:
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "not Flax and not Onnx" \
             --make-reports=tests_torch_minimum_cuda \
-            tests/models/test_modelling_common.py \
+            tests/models/test_modeling_common.py \
             tests/pipelines/test_pipelines_common.py \
             tests/pipelines/test_pipeline_utils.py \
             tests/pipelines/test_pipelines.py \

docs/source/en/api/pipelines/sana.md

Lines changed: 2 additions & 2 deletions
@@ -59,10 +59,10 @@ Refer to the [Quantization](../../quantization/overview) overview to learn more
 ```py
 import torch
 from diffusers import BitsAndBytesConfig as DiffusersBitsAndBytesConfig, SanaTransformer2DModel, SanaPipeline
-from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModelForCausalLM
+from transformers import BitsAndBytesConfig as BitsAndBytesConfig, AutoModel
 
 quant_config = BitsAndBytesConfig(load_in_8bit=True)
-text_encoder_8bit = AutoModelForCausalLM.from_pretrained(
+text_encoder_8bit = AutoModel.from_pretrained(
     "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
     subfolder="text_encoder",
     quantization_config=quant_config,
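
For reference, here is the corrected snippet assembled as a self-contained example. The hunk cuts off after `quantization_config`, so the `torch_dtype` argument and the closing parenthesis below are assumptions rather than part of the shown diff:

```py
import torch
from transformers import BitsAndBytesConfig, AutoModel

# Load the Sana text encoder in 8-bit, per the corrected doc snippet.
quant_config = BitsAndBytesConfig(load_in_8bit=True)
text_encoder_8bit = AutoModel.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    subfolder="text_encoder",
    quantization_config=quant_config,
    torch_dtype=torch.float16,  # assumed; not visible in the truncated hunk
)
```

`AutoModel` resolves the architecture from the checkpoint config and loads the base model, presumably because the pipeline only needs the encoder's hidden states rather than the causal-LM head that `AutoModelForCausalLM` would attach.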

examples/advanced_diffusion_training/README.md

Lines changed: 11 additions & 0 deletions
@@ -67,6 +67,17 @@ write_basic_config()
 When running `accelerate config`, if we specify torch compile mode to True there can be dramatic speedups.
 Note also that we use PEFT library as backend for LoRA training, make sure to have `peft>=0.6.0` installed in your environment.
 
+Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
+```bash
+huggingface-cli login
+```
+This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
+
+> [!NOTE]
+> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
+> `pip install wandb`
+> Alternatively, you can use other tools / train without reporting by modifying the flag `--report_to="wandb"`.
+
 ### Pivotal Tuning
 **Training with text encoder(s)**
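
As an aside (not part of this commit), the same login can be performed programmatically, assuming `huggingface_hub` is installed; a minimal sketch:

```py
# Programmatic alternative to `huggingface-cli login`; not part of this commit.
from huggingface_hub import login

# Paste a token created at https://huggingface.co/settings/tokens
login(token="hf_...")
```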

examples/advanced_diffusion_training/README_flux.md

Lines changed: 11 additions & 0 deletions
@@ -65,6 +65,17 @@ write_basic_config()
 When running `accelerate config`, if we specify torch compile mode to True there can be dramatic speedups.
 Note also that we use PEFT library as backend for LoRA training, make sure to have `peft>=0.6.0` installed in your environment.
 
+Lastly, we recommend logging into your HF account so that your trained LoRA is automatically uploaded to the hub:
+```bash
+huggingface-cli login
+```
+This command will prompt you for a token. Copy-paste yours from your [settings/tokens](https://huggingface.co/settings/tokens), and press Enter.
+
+> [!NOTE]
+> In the examples below we use `wandb` to document the training runs. To do the same, make sure to install `wandb`:
+> `pip install wandb`
+> Alternatively, you can use other tools / train without reporting by modifying the flag `--report_to="wandb"`.
+
 ### Target Modules
 When LoRA was first adapted from language models to diffusion models, it was applied to the cross-attention layers in the Unet that relate the image representations with the prompts that describe them.
 More recently, SOTA text-to-image diffusion models replaced the Unet with a diffusion Transformer (DiT). With this change, we may also want to explore

examples/community/README.md

Lines changed: 5 additions & 5 deletions
Large diffs are not rendered by default.

examples/community/adaptive_mask_inpainting.py

Lines changed: 9 additions & 5 deletions
@@ -416,10 +416,14 @@ def __init__(
                 " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
             )
 
-        is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
-            version.parse(unet.config._diffusers_version).base_version
-        ) < version.parse("0.9.0.dev0")
-        is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
+        is_unet_version_less_0_9_0 = (
+            unet is not None
+            and hasattr(unet.config, "_diffusers_version")
+            and version.parse(version.parse(unet.config._diffusers_version).base_version) < version.parse("0.9.0.dev0")
+        )
+        is_unet_sample_size_less_64 = (
+            unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
+        )
         if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
             deprecation_message = (
                 "The configuration file of the unet has set the default `sample_size` to smaller than"
@@ -438,7 +442,7 @@ def __init__(
             unet._internal_dict = FrozenDict(new_config)
 
         # Check shapes, assume num_channels_latents == 4, num_channels_mask == 1, num_channels_masked == 4
-        if unet.config.in_channels != 9:
+        if unet is not None and unet.config.in_channels != 9:
             logger.info(f"You have loaded a UNet with {unet.config.in_channels} input channels which.")
 
         self.register_modules(
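
The same guard pattern recurs in the next two community pipelines: every `unet.config` access is short-circuited behind `unet is not None`, so a pipeline constructed with `unet=None` skips the version and sample-size checks instead of raising `AttributeError`. A minimal standalone sketch of the pattern (the helper name `check_legacy_unet` is hypothetical, for illustration only):

```py
from packaging import version


def check_legacy_unet(unet) -> bool:
    """Hypothetical helper restating the None-safe guards added in this commit."""
    # Both checks short-circuit on `unet is not None`, so unet=None yields False
    # instead of raising AttributeError on `unet.config`.
    is_unet_version_less_0_9_0 = (
        unet is not None
        and hasattr(unet.config, "_diffusers_version")
        and version.parse(version.parse(unet.config._diffusers_version).base_version)
        < version.parse("0.9.0.dev0")
    )
    is_unet_sample_size_less_64 = (
        unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
    )
    # Only legacy (<0.9.0) checkpoints with sample_size < 64 trigger the deprecation path.
    return is_unet_version_less_0_9_0 and is_unet_sample_size_less_64
```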

examples/community/composable_stable_diffusion.py

Lines changed: 8 additions & 4 deletions
@@ -132,10 +132,14 @@ def __init__(
                 " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
             )
 
-        is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
-            version.parse(unet.config._diffusers_version).base_version
-        ) < version.parse("0.9.0.dev0")
-        is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
+        is_unet_version_less_0_9_0 = (
+            unet is not None
+            and hasattr(unet.config, "_diffusers_version")
+            and version.parse(version.parse(unet.config._diffusers_version).base_version) < version.parse("0.9.0.dev0")
+        )
+        is_unet_sample_size_less_64 = (
+            unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
+        )
         if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
             deprecation_message = (
                 "The configuration file of the unet has set the default `sample_size` to smaller than"

examples/community/instaflow_one_step.py

Lines changed: 8 additions & 4 deletions
@@ -152,10 +152,14 @@ def __init__(
                 " checker. If you do not want to use the safety checker, you can pass `'safety_checker=None'` instead."
             )
 
-        is_unet_version_less_0_9_0 = hasattr(unet.config, "_diffusers_version") and version.parse(
-            version.parse(unet.config._diffusers_version).base_version
-        ) < version.parse("0.9.0.dev0")
-        is_unet_sample_size_less_64 = hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
+        is_unet_version_less_0_9_0 = (
+            unet is not None
+            and hasattr(unet.config, "_diffusers_version")
+            and version.parse(version.parse(unet.config._diffusers_version).base_version) < version.parse("0.9.0.dev0")
+        )
+        is_unet_sample_size_less_64 = (
+            unet is not None and hasattr(unet.config, "sample_size") and unet.config.sample_size < 64
+        )
         if is_unet_version_less_0_9_0 and is_unet_sample_size_less_64:
             deprecation_message = (
                 "The configuration file of the unet has set the default `sample_size` to smaller than"
