Commit a36ba49

Merge branch 'main' into integrations/skyreels-v1

2 parents: dcc7d01 + f070775
File tree: 108 files changed, +3672 additions, -1080 deletions


.github/workflows/pr_tests.yml (3 additions, 1 deletion)

```diff
@@ -64,6 +64,7 @@ jobs:
         run: |
           python utils/check_copies.py
           python utils/check_dummies.py
+          python utils/check_support_list.py
           make deps_table_check_updated
       - name: Check if failure
         if: ${{ failure() }}
@@ -120,7 +121,8 @@ jobs:
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
-          python -m uv pip install accelerate
+          pip uninstall transformers -y && python -m uv pip install -U transformers@git+https://github.com/huggingface/transformers.git --no-deps
+          pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git --no-deps
```
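The new `python utils/check_support_list.py` step joins the other repo-consistency checks (`check_copies.py`, `check_dummies.py`). Its exact logic is not shown in this diff; conceptually, checks of this kind cross-reference names declared in the codebase against a curated list and fail CI on any mismatch. A minimal, hypothetical sketch of that pattern (function names are illustrative, not the real script's API):

```python
# Hypothetical sketch of a repo-consistency check in the spirit of
# utils/check_support_list.py; the real script's behavior is not shown here.

def find_missing(declared: set[str], documented: set[str]) -> list[str]:
    """Return declared names that are absent from the documented list."""
    return sorted(declared - documented)

def run_check(declared, documented) -> int:
    """Exit-code-style result: non-zero fails the CI step."""
    missing = find_missing(set(declared), set(documented))
    if missing:
        print("Not documented:", ", ".join(missing))
        return 1
    return 0
```

A check like this is cheap to run on every PR, which is why it sits in the fast `pr_tests` workflow rather than the GPU suites.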

.github/workflows/push_tests.yml (11 additions, 2 deletions)

```diff
@@ -1,6 +1,13 @@
 name: Fast GPU Tests on main

 on:
+  pull_request:
+    branches: main
+    paths:
+      - "src/diffusers/models/modeling_utils.py"
+      - "src/diffusers/models/model_loading_utils.py"
+      - "src/diffusers/pipelines/pipeline_utils.py"
+      - "src/diffusers/pipeline_loading_utils.py"
   workflow_dispatch:
   push:
     branches:
@@ -160,6 +167,7 @@ jobs:
           path: reports

   flax_tpu_tests:
+    if: ${{ github.event_name != 'pull_request' }}
     name: Flax TPU Tests
     runs-on:
       group: gcp-ct5lp-hightpu-8t
@@ -208,6 +216,7 @@ jobs:
           path: reports

   onnx_cuda_tests:
+    if: ${{ github.event_name != 'pull_request' }}
     name: ONNX CUDA Tests
     runs-on:
       group: aws-g4dn-2xlarge
@@ -256,6 +265,7 @@ jobs:
           path: reports

   run_torch_compile_tests:
+    if: ${{ github.event_name != 'pull_request' }}
     name: PyTorch Compile CUDA tests

     runs-on:
@@ -299,6 +309,7 @@ jobs:
           path: reports

   run_xformers_tests:
+    if: ${{ github.event_name != 'pull_request' }}
     name: PyTorch xformers CUDA tests

     runs-on:
@@ -349,7 +360,6 @@ jobs:
     container:
       image: diffusers/diffusers-pytorch-cuda
       options: --gpus 0 --shm-size "16gb" --ipc host
-
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -359,7 +369,6 @@ jobs:
       - name: NVIDIA-SMI
         run: |
           nvidia-smi
-
       - name: Install dependencies
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
```
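The effect of this change: the GPU workflow now also triggers on pull requests, but only when one of the four core loading/utility files changes, and the heavier jobs (Flax TPU, ONNX CUDA, torch.compile, xformers) are skipped on PR runs via `if: ${{ github.event_name != 'pull_request' }}`. GitHub Actions evaluates this internally; a rough Python sketch of the combined gating logic, with illustrative function names, assuming exact-path matching as configured above:

```python
from fnmatch import fnmatch

# Paths from the new pull_request trigger in the diff above.
WATCHED_PATHS = [
    "src/diffusers/models/modeling_utils.py",
    "src/diffusers/models/model_loading_utils.py",
    "src/diffusers/pipelines/pipeline_utils.py",
    "src/diffusers/pipeline_loading_utils.py",
]

def workflow_triggered(event_name: str, changed_files: list[str]) -> bool:
    """Does the workflow run at all for this event?"""
    if event_name in ("push", "workflow_dispatch"):
        return True
    if event_name == "pull_request":
        # The `paths:` filter: any changed file matching a watched path triggers the run.
        return any(fnmatch(f, p) for f in changed_files for p in WATCHED_PATHS)
    return False

def job_runs(event_name: str, guarded: bool) -> bool:
    """Jobs carrying `if: github.event_name != 'pull_request'` skip PR-triggered runs."""
    return not (guarded and event_name == "pull_request")
```

So a PR touching `modeling_utils.py` runs the fast CUDA jobs but not the TPU/ONNX/compile/xformers jobs, while pushes to main still run everything.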

docs/source/en/api/activations.md (13 additions)

```diff
@@ -25,3 +25,16 @@ Customized activation functions for supporting various models in 🤗 Diffusers.
 ## ApproximateGELU

 [[autodoc]] models.activations.ApproximateGELU
+
+
+## SwiGLU
+
+[[autodoc]] models.activations.SwiGLU
+
+## FP32SiLU
+
+[[autodoc]] models.activations.FP32SiLU
+
+## LinearActivation
+
+[[autodoc]] models.activations.LinearActivation
```
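Of the newly documented activations, SwiGLU is the least self-explanatory: it gates one half of a projected hidden state with the SiLU of the other half (FP32SiLU is SiLU computed in float32 for numerical stability). A minimal elementwise sketch of the idea in plain Python; the library class additionally owns the linear projection that produces both halves before they are split:

```python
import math

def silu(x: float) -> float:
    # SiLU (a.k.a. swish): x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

def swiglu(hidden: list[float], gate: list[float]) -> list[float]:
    """SwiGLU: gate each hidden unit with SiLU(gate).
    In practice both halves come from a single linear layer whose
    output is chunked in two before this elementwise step."""
    return [h * silu(g) for h, g in zip(hidden, gate)]
```

A zero gate fully suppresses the corresponding hidden unit, since SiLU(0) = 0, while large positive gates pass it through almost unchanged.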

docs/source/en/api/attnprocessor.md (17 additions)

```diff
@@ -147,3 +147,20 @@ An attention processor is a class for applying different types of attention mech
 ## XLAFlashAttnProcessor2_0

 [[autodoc]] models.attention_processor.XLAFlashAttnProcessor2_0
+
+## XFormersJointAttnProcessor
+
+[[autodoc]] models.attention_processor.XFormersJointAttnProcessor
+
+## IPAdapterXFormersAttnProcessor
+
+[[autodoc]] models.attention_processor.IPAdapterXFormersAttnProcessor
+
+## FluxIPAdapterJointAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.FluxIPAdapterJointAttnProcessor2_0
+
+
+## XLAFluxFlashAttnProcessor2_0
+
+[[autodoc]] models.attention_processor.XLAFluxFlashAttnProcessor2_0
```
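All of these classes follow the same plug-in pattern: an attention module delegates its forward computation to an interchangeable processor object, so backends (xformers, XLA flash attention, IP-Adapter variants) can be swapped without touching the module's weights. A toy sketch of that dispatch pattern; the class and processor names here are made up for illustration and stand in for the real attention math:

```python
class DefaultProcessor:
    def __call__(self, module, x):
        # Stand-in for the default attention computation.
        return [v * module.scale for v in x]

class DoublingProcessor:
    def __call__(self, module, x):
        # Stand-in for an alternative backend producing a different result.
        return [v * module.scale * 2 for v in x]

class Attention:
    """Toy module that delegates to a swappable processor, mirroring how
    diffusers attention modules dispatch to AttnProcessor classes."""
    def __init__(self, scale: float = 1.0):
        self.scale = scale
        self.processor = DefaultProcessor()

    def set_processor(self, processor):
        self.processor = processor

    def __call__(self, x):
        return self.processor(self, x)
```

Swapping the processor changes the computation while the module (and its state) stays in place, which is exactly why new backends can be added as standalone processor classes like the ones documented above.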

docs/source/en/api/loaders/lora.md (5 additions)

```diff
@@ -23,6 +23,7 @@ LoRA is a fast and lightweight training method that inserts and trains a signifi
 - [`LTXVideoLoraLoaderMixin`] provides similar functions for [LTX-Video](https://huggingface.co/docs/diffusers/main/en/api/pipelines/ltx_video).
 - [`SanaLoraLoaderMixin`] provides similar functions for [Sana](https://huggingface.co/docs/diffusers/main/en/api/pipelines/sana).
 - [`HunyuanVideoLoraLoaderMixin`] provides similar functions for [HunyuanVideo](https://huggingface.co/docs/diffusers/main/en/api/pipelines/hunyuan_video).
+- [`Lumina2LoraLoaderMixin`] provides similar functions for [Lumina2](https://huggingface.co/docs/diffusers/main/en/api/pipelines/lumina2).
 - [`AmusedLoraLoaderMixin`] is for the [`AmusedPipeline`].
 - [`LoraBaseMixin`] provides a base class with several utility methods to fuse, unfuse, unload, LoRAs and more.

@@ -68,6 +69,10 @@ To learn more about how to load LoRA weights, see the [LoRA](../../using-diffuse

 [[autodoc]] loaders.lora_pipeline.HunyuanVideoLoraLoaderMixin

+## Lumina2LoraLoaderMixin
+
+[[autodoc]] loaders.lora_pipeline.Lumina2LoraLoaderMixin
+
 ## AmusedLoraLoaderMixin

 [[autodoc]] loaders.lora_pipeline.AmusedLoraLoaderMixin
```

docs/source/en/api/normalization.md (40 additions)

```diff
@@ -29,3 +29,43 @@ Customized normalization layers for supporting various models in 🤗 Diffusers.
 ## AdaGroupNorm

 [[autodoc]] models.normalization.AdaGroupNorm
+
+## AdaLayerNormContinuous
+
+[[autodoc]] models.normalization.AdaLayerNormContinuous
+
+## RMSNorm
+
+[[autodoc]] models.normalization.RMSNorm
+
+## GlobalResponseNorm
+
+[[autodoc]] models.normalization.GlobalResponseNorm
+
+
+## LuminaLayerNormContinuous
+[[autodoc]] models.normalization.LuminaLayerNormContinuous
+
+## SD35AdaLayerNormZeroX
+[[autodoc]] models.normalization.SD35AdaLayerNormZeroX
+
+## AdaLayerNormZeroSingle
+[[autodoc]] models.normalization.AdaLayerNormZeroSingle
+
+## LuminaRMSNormZero
+[[autodoc]] models.normalization.LuminaRMSNormZero
+
+## LpNorm
+[[autodoc]] models.normalization.LpNorm
+
+## CogView3PlusAdaLayerNormZeroTextImage
+[[autodoc]] models.normalization.CogView3PlusAdaLayerNormZeroTextImage
+
+## CogVideoXLayerNormZero
+[[autodoc]] models.normalization.CogVideoXLayerNormZero
+
+## MochiRMSNormZero
+[[autodoc]] models.transformers.transformer_mochi.MochiRMSNormZero
+
+## MochiRMSNorm
+[[autodoc]] models.normalization.MochiRMSNorm
```
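Several of these new entries (RMSNorm, LuminaRMSNormZero, MochiRMSNorm) are variants of the same core operation: rescale the input by the reciprocal of its root-mean-square, with no mean subtraction and no bias, unlike LayerNorm. A minimal sketch of the core in plain Python, assuming a learnable elementwise weight as in the typical formulation:

```python
import math

def rms_norm(x: list[float], weight: list[float], eps: float = 1e-6) -> list[float]:
    """RMSNorm: divide by the root-mean-square of the input, then
    apply an elementwise learned scale. No mean subtraction, no bias."""
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [w * v / rms for w, v in zip(weight, x)]
```

With a unit weight the output has RMS approximately 1, which is the normalization invariant the "Zero"/adaptive variants then modulate with condition-dependent scales and shifts.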

docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md (1 addition, 1 deletion)

```diff
@@ -77,7 +77,7 @@ from diffusers import StableDiffusion3Pipeline
 from transformers import SiglipVisionModel, SiglipImageProcessor

 image_encoder_id = "google/siglip-so400m-patch14-384"
-ip_adapter_id = "guiyrt/InstantX-SD3.5-Large-IP-Adapter-diffusers"
+ip_adapter_id = "InstantX/SD3.5-Large-IP-Adapter"

 feature_extractor = SiglipImageProcessor.from_pretrained(
     image_encoder_id,
```

examples/README.md (3 additions, 3 deletions)

```diff
@@ -40,9 +40,9 @@ Training examples show how to pretrain or fine-tune diffusion models for a varie
 | [**Text-to-Image fine-tuning**](./text_to_image) |||
 | [**Textual Inversion**](./textual_inversion) | ✅ | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb)
 | [**Dreambooth**](./dreambooth) | ✅ | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb)
-| [**ControlNet**](./controlnet) | ✅ | ✅ | -
-| [**InstructPix2Pix**](./instruct_pix2pix) | ✅ | ✅ | -
-| [**Reinforcement Learning for Control**](./reinforcement_learning) | - | - | coming soon.
+| [**ControlNet**](./controlnet) | ✅ | ✅ | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb)
+| [**InstructPix2Pix**](./instruct_pix2pix) | ✅ | ✅ | [Notebook](https://github.com/huggingface/notebooks/blob/main/diffusers/InstructPix2Pix_using_diffusers.ipynb)
+| [**Reinforcement Learning for Control**](./reinforcement_learning) | - | - | [Notebook1](https://github.com/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_for_control.ipynb), [Notebook2](https://github.com/huggingface/notebooks/blob/main/diffusers/reinforcement_learning_with_diffusers.ipynb)

 ## Community
```
