Commit 063e489

Merge branch 'main' into fastercache
2 parents 4c75017 + 3e99b56 commit 063e489

File tree

111 files changed: +9671 -188 lines


.github/workflows/trufflehog.yml

Lines changed: 3 additions & 0 deletions

```diff
@@ -13,3 +13,6 @@ jobs:
           fetch-depth: 0
       - name: Secret Scanning
         uses: trufflesecurity/trufflehog@main
+        with:
+          extra_args: --results=verified,unknown
+
```

docs/source/en/_toctree.yml

Lines changed: 14 additions & 0 deletions

```diff
@@ -89,6 +89,8 @@
       title: Kandinsky
     - local: using-diffusers/ip_adapter
       title: IP-Adapter
+    - local: using-diffusers/omnigen
+      title: OmniGen
     - local: using-diffusers/pag
       title: PAG
     - local: using-diffusers/controlnet
@@ -276,6 +278,8 @@
       title: ConsisIDTransformer3DModel
     - local: api/models/cogview3plus_transformer2d
       title: CogView3PlusTransformer2DModel
+    - local: api/models/cogview4_transformer2d
+      title: CogView4Transformer2DModel
     - local: api/models/dit_transformer2d
       title: DiTTransformer2DModel
     - local: api/models/flux_transformer
@@ -288,10 +292,14 @@
       title: LatteTransformer3DModel
     - local: api/models/lumina_nextdit2d
       title: LuminaNextDiT2DModel
+    - local: api/models/lumina2_transformer2d
+      title: Lumina2Transformer2DModel
     - local: api/models/ltx_video_transformer3d
       title: LTXVideoTransformer3DModel
     - local: api/models/mochi_transformer3d
       title: MochiTransformer3DModel
+    - local: api/models/omnigen_transformer
+      title: OmniGenTransformer2DModel
     - local: api/models/pixart_transformer2d
       title: PixArtTransformer2DModel
     - local: api/models/prior_transformer
@@ -376,6 +384,8 @@
       title: CogVideoX
     - local: api/pipelines/cogview3
       title: CogView3
+    - local: api/pipelines/cogview4
+      title: CogView4
     - local: api/pipelines/consisid
       title: ConsisID
     - local: api/pipelines/consistency_models
@@ -438,6 +448,8 @@
       title: LEDITS++
     - local: api/pipelines/ltx_video
       title: LTXVideo
+    - local: api/pipelines/lumina2
+      title: Lumina 2.0
     - local: api/pipelines/lumina
       title: Lumina-T2X
     - local: api/pipelines/marigold
@@ -448,6 +460,8 @@
       title: MultiDiffusion
     - local: api/pipelines/musicldm
       title: MusicLDM
+    - local: api/pipelines/omnigen
+      title: OmniGen
     - local: api/pipelines/pag
       title: PAG
     - local: api/pipelines/paint_by_example
```
docs/source/en/api/models/cogview4_transformer2d.md

Lines changed: 30 additions & 0 deletions

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# CogView4Transformer2DModel

A Diffusion Transformer model for 2D data from CogView4.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")
```

## CogView4Transformer2DModel

[[autodoc]] CogView4Transformer2DModel

## Transformer2DModelOutput

[[autodoc]] models.modeling_outputs.Transformer2DModelOutput
docs/source/en/api/models/lumina2_transformer2d.md

Lines changed: 30 additions & 0 deletions

<!-- Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License. -->

# Lumina2Transformer2DModel

A Diffusion Transformer model for 2D image data, introduced in [Lumina Image 2.0](https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0) by Alpha-VLLM.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import Lumina2Transformer2DModel

transformer = Lumina2Transformer2DModel.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", subfolder="transformer", torch_dtype=torch.bfloat16
)
```

## Lumina2Transformer2DModel

[[autodoc]] Lumina2Transformer2DModel

## Transformer2DModelOutput

[[autodoc]] models.modeling_outputs.Transformer2DModelOutput
docs/source/en/api/models/omnigen_transformer.md

Lines changed: 30 additions & 0 deletions

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# OmniGenTransformer2DModel

A Transformer model that accepts multimodal instructions to generate images for [OmniGen](https://github.com/VectorSpaceLab/OmniGen/).

The abstract from the paper is:

*The emergence of Large Language Models (LLMs) has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import OmniGenTransformer2DModel

transformer = OmniGenTransformer2DModel.from_pretrained(
    "Shitao/OmniGen-v1-diffusers", subfolder="transformer", torch_dtype=torch.bfloat16
)
```

## OmniGenTransformer2DModel

[[autodoc]] OmniGenTransformer2DModel
docs/source/en/api/pipelines/cogview4.md

Lines changed: 34 additions & 0 deletions

<!--Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->

# CogView4

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

This pipeline was contributed by [zRzRzRzRzRzRzR](https://github.com/zRzRzRzRzRzRzR). The original codebase and weights can be found under [hf.co/THUDM](https://huggingface.co/THUDM).

## CogView4Pipeline

[[autodoc]] CogView4Pipeline
  - all
  - __call__

## CogView4PipelineOutput

[[autodoc]] pipelines.cogview4.pipeline_output.CogView4PipelineOutput
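This page does not ship a usage snippet yet; below is a minimal text-to-image sketch, assuming the standard diffusers pipeline call signature and the `THUDM/CogView4-6B` checkpoint referenced in the CogView4Transformer2DModel docs above.

```python
import torch
from diffusers import CogView4Pipeline

# Checkpoint name taken from the CogView4Transformer2DModel page in this commit.
pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Parameter names follow the common diffusers text-to-image interface;
# they are assumptions here, not confirmed against this pipeline's autodoc.
image = pipe(
    prompt="a photo of an astronaut riding a horse on mars",
    guidance_scale=3.5,
    num_inference_steps=50,
).images[0]
image.save("cogview4.png")
```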
docs/source/en/api/pipelines/lumina2.md

Lines changed: 83 additions & 0 deletions

<!-- Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. -->

# Lumina2

[Lumina Image 2.0: A Unified and Efficient Image Generative Model](https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0) is a 2 billion parameter flow-based diffusion transformer capable of generating diverse images from text descriptions.

The abstract from the paper is:

*We introduce Lumina-Image 2.0, an advanced text-to-image model that surpasses previous state-of-the-art methods across multiple benchmarks, while also shedding light on its potential to evolve into a generalist vision intelligence model. Lumina-Image 2.0 exhibits three key properties: (1) Unification – it adopts a unified architecture that treats text and image tokens as a joint sequence, enabling natural cross-modal interactions and facilitating task expansion. Besides, since high-quality captioners can provide semantically better-aligned text-image training pairs, we introduce a unified captioning system, UniCaptioner, which generates comprehensive and precise captions for the model. This not only accelerates model convergence but also enhances prompt adherence, variable-length prompt handling, and task generalization via prompt templates. (2) Efficiency – to improve the efficiency of the unified architecture, we develop a set of optimization techniques that improve semantic learning and fine-grained texture generation during training while incorporating inference-time acceleration strategies without compromising image quality. (3) Transparency – we open-source all training details, code, and models to ensure full reproducibility, aiming to bridge the gap between well-resourced closed-source research teams and independent developers.*

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>
## Using Single File loading with Lumina Image 2.0

Single file loading for Lumina Image 2.0 is available for the `Lumina2Transformer2DModel`.

```python
import torch
from diffusers import Lumina2Transformer2DModel, Lumina2Text2ImgPipeline

ckpt_path = "https://huggingface.co/Alpha-VLLM/Lumina-Image-2.0/blob/main/consolidated.00-of-01.pth"
transformer = Lumina2Transformer2DModel.from_single_file(
    ckpt_path, torch_dtype=torch.bfloat16
)

pipe = Lumina2Text2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
image = pipe(
    "a cat holding a sign that says hello",
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("lumina-single-file.png")
```

## Using GGUF Quantized Checkpoints with Lumina Image 2.0

GGUF quantized checkpoints for the `Lumina2Transformer2DModel` can be loaded via `from_single_file` with the `GGUFQuantizationConfig`.

```python
import torch
from diffusers import Lumina2Transformer2DModel, Lumina2Text2ImgPipeline, GGUFQuantizationConfig

ckpt_path = "https://huggingface.co/calcuis/lumina-gguf/blob/main/lumina2-q4_0.gguf"
transformer = Lumina2Transformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = Lumina2Text2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
image = pipe(
    "a cat holding a sign that says hello",
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("lumina-gguf.png")
```

## Lumina2Text2ImgPipeline

[[autodoc]] Lumina2Text2ImgPipeline
  - all
  - __call__
docs/source/en/api/pipelines/omnigen.md

Lines changed: 80 additions & 0 deletions

<!--Copyright 2024 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
-->

# OmniGen

[OmniGen: Unified Image Generation](https://arxiv.org/pdf/2409.11340) from BAAI, by Shitao Xiao, Yueze Wang, Junjie Zhou, Huaying Yuan, Xingrun Xing, Ruiran Yan, Chaofan Li, Shuting Wang, Tiejun Huang, Zheng Liu.

The abstract from the paper is:

*The emergence of Large Language Models (LLMs) has unified language generation tasks and revolutionized human-machine interaction. However, in the realm of image generation, a unified model capable of handling various tasks within a single framework remains largely unexplored. In this work, we introduce OmniGen, a new diffusion model for unified image generation. OmniGen is characterized by the following features: 1) Unification: OmniGen not only demonstrates text-to-image generation capabilities but also inherently supports various downstream tasks, such as image editing, subject-driven generation, and visual conditional generation. 2) Simplicity: The architecture of OmniGen is highly simplified, eliminating the need for additional plugins. Moreover, compared to existing diffusion models, it is more user-friendly and can complete complex tasks end-to-end through instructions without the need for extra intermediate steps, greatly simplifying the image generation workflow. 3) Knowledge Transfer: Benefit from learning in a unified format, OmniGen effectively transfers knowledge across different tasks, manages unseen tasks and domains, and exhibits novel capabilities. We also explore the model’s reasoning capabilities and potential applications of the chain-of-thought mechanism. This work represents the first attempt at a general-purpose image generation model, and we will release our resources at https://github.com/VectorSpaceLab/OmniGen to foster future advancements.*

<Tip>

Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](../../using-diffusers/loading#reuse-a-pipeline) section to learn how to efficiently load the same components into multiple pipelines.

</Tip>

This pipeline was contributed by [staoxiao](https://github.com/staoxiao). The original codebase can be found [here](https://github.com/VectorSpaceLab/OmniGen). The original weights can be found under [hf.co/shitao](https://huggingface.co/Shitao/OmniGen-v1).

## Inference

First, load the pipeline:

```python
import torch
from diffusers import OmniGenPipeline
from diffusers.utils import load_image

pipe = OmniGenPipeline.from_pretrained("Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16)
pipe.to("cuda")
```

For text-to-image, pass a text prompt. By default, OmniGen generates a 1024x1024 image. You can set the `height` and `width` parameters to generate images of a different size.

```python
prompt = "Realistic photo. A young woman sits on a sofa, holding a book and facing the camera. She wears delicate silver hoop earrings adorned with tiny, sparkling diamonds that catch the light, with her long chestnut hair cascading over her shoulders. Her eyes are focused and gentle, framed by long, dark lashes. She is dressed in a cozy cream sweater, which complements her warm, inviting smile. Behind her, there is a table with a cup of water in a sleek, minimalist blue mug. The background is a serene indoor setting with soft natural light filtering through a window, adorned with tasteful art and flowers, creating a cozy and peaceful ambiance. 4K, HD."
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    guidance_scale=3,
    generator=torch.Generator(device="cpu").manual_seed(111),
).images[0]
image.save("output.png")
```

OmniGen supports multimodal inputs. When the input includes an image, add the placeholder `<img><|image_1|></img>` to the text prompt to represent the image. It is recommended to enable `use_input_image_size_as_output` to keep the edited image the same size as the original.

```python
prompt = "<img><|image_1|></img> Remove the woman's earrings. Replace the mug with a clear glass filled with sparkling iced cola."
input_images = [load_image("https://raw.githubusercontent.com/VectorSpaceLab/OmniGen/main/imgs/docs_img/t2i_woman_with_book.png")]
image = pipe(
    prompt=prompt,
    input_images=input_images,
    guidance_scale=2,
    img_guidance_scale=1.6,
    use_input_image_size_as_output=True,
    generator=torch.Generator(device="cpu").manual_seed(222),
).images[0]
image.save("output.png")
```
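The placeholder convention extends to several reference images: number them `<|image_1|>`, `<|image_2|>`, and so on, and they are matched by position against the `input_images` list. A hedged sketch with two inputs follows (the image URLs are hypothetical stand-ins, not files from the OmniGen repository):

```python
# Assumes `pipe` is the OmniGenPipeline loaded above.
import torch
from diffusers.utils import load_image

# Each <img><|image_N|></img> placeholder pairs with input_images[N-1].
prompt = "A man from <img><|image_1|></img> and a woman from <img><|image_2|></img> sit together on a park bench."
input_images = [
    load_image("https://example.com/man.png"),    # hypothetical URL
    load_image("https://example.com/woman.png"),  # hypothetical URL
]
image = pipe(
    prompt=prompt,
    input_images=input_images,
    guidance_scale=2.5,
    img_guidance_scale=1.6,
    generator=torch.Generator(device="cpu").manual_seed(0),
).images[0]
image.save("output_multi.png")
```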
## OmniGenPipeline

[[autodoc]] OmniGenPipeline
  - all
  - __call__

docs/source/en/api/utilities.md

Lines changed: 4 additions & 0 deletions

```diff
@@ -45,3 +45,7 @@ Utility and helper functions for working with 🤗 Diffusers.
 ## apply_layerwise_casting
 
 [[autodoc]] hooks.layerwise_casting.apply_layerwise_casting
+
+## apply_group_offloading
+
+[[autodoc]] hooks.group_offloading.apply_group_offloading
```
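Since `apply_group_offloading` is newly documented here, a rough usage sketch may help; the parameter names below are assumptions based on the hook's autodoc target, so check the generated API reference for the exact signature.

```python
import torch
from diffusers import Lumina2Transformer2DModel
from diffusers.hooks import apply_group_offloading  # import path assumed from the autodoc entry above

transformer = Lumina2Transformer2DModel.from_pretrained(
    "Alpha-VLLM/Lumina-Image-2.0", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Keep groups of blocks on the CPU and move each group to the GPU only while it runs,
# trading some latency for a much smaller peak VRAM footprint.
apply_group_offloading(
    transformer,
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="block_level",   # assumed option: offload ModuleList blocks in groups
    num_blocks_per_group=2,
)
```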
