Commit a74f02f
[Docs] CogView4 comment fix (#10957)
* Update pipeline_cogview4.py
* Use GLM instead of T5 in doc
1 parent 66bf7ea commit a74f02f

File tree

1 file changed: +4 -6 lines


src/diffusers/pipelines/cogview4/pipeline_cogview4.py

Lines changed: 4 additions & 6 deletions
```diff
@@ -143,13 +143,11 @@ class CogView4Pipeline(DiffusionPipeline):
     Args:
         vae ([`AutoencoderKL`]):
             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-        text_encoder ([`T5EncoderModel`]):
-            Frozen text-encoder. CogView4 uses
-            [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
-            [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
-        tokenizer (`T5Tokenizer`):
+        text_encoder ([`GLMModel`]):
+            Frozen text-encoder. CogView4 uses [glm-4-9b-hf](https://huggingface.co/THUDM/glm-4-9b-hf).
+        tokenizer (`PreTrainedTokenizer`):
             Tokenizer of class
-            [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
+            [PreTrainedTokenizer](https://huggingface.co/docs/transformers/main/en/main_classes/tokenizer#transformers.PreTrainedTokenizer).
         transformer ([`CogView4Transformer2DModel`]):
             A text conditioned `CogView4Transformer2DModel` to denoise the encoded image latents.
         scheduler ([`SchedulerMixin`]):
```
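For context, the components named in the corrected docstring are what you get when loading the pipeline. Below is a minimal usage sketch, assuming the `THUDM/CogView4-6B` checkpoint, a CUDA GPU with enough memory, and the standard diffusers text-to-image call signature; the prompt and step count are illustrative only:

```python
import torch
from diffusers import CogView4Pipeline

# Load the pipeline. Per this commit, the text_encoder is a GLM model
# (glm-4-9b), not a T5EncoderModel, and the tokenizer is a generic
# PreTrainedTokenizer rather than a T5Tokenizer.
pipe = CogView4Pipeline.from_pretrained(
    "THUDM/CogView4-6B", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Inspect the actual component classes the docstring now describes.
print(type(pipe.text_encoder).__name__)
print(type(pipe.tokenizer).__name__)

# Generate an image; the prompt is encoded by the GLM text encoder.
image = pipe(
    prompt="A photo of a red panda in a bamboo forest",
    num_inference_steps=50,
).images[0]
image.save("cogview4.png")
```

Note that this requires downloading the full checkpoint, so it is not suitable for a quick smoke test.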

0 commit comments
