src/diffusers/pipelines/cogview4 (1 file changed, +4 -6 lines)

@@ -143,13 +143,11 @@ class CogView4Pipeline(DiffusionPipeline):
     Args:
         vae ([`AutoencoderKL`]):
             Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
-        text_encoder ([`T5EncoderModel`]):
-            Frozen text-encoder. CogView4 uses
-            [T5](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5EncoderModel); specifically the
-            [t5-v1_1-xxl](https://huggingface.co/PixArt-alpha/PixArt-alpha/tree/main/t5-v1_1-xxl) variant.
-        tokenizer (`T5Tokenizer`):
+        text_encoder ([`GLMModel`]):
+            Frozen text-encoder. CogView4 uses [glm-4-9b-hf](https://huggingface.co/THUDM/glm-4-9b-hf).
+        tokenizer (`PreTrainedTokenizer`):
             Tokenizer of class
-            [T5Tokenizer](https://huggingface.co/docs/transformers/model_doc/t5#transformers.T5Tokenizer).
+            [PreTrainedTokenizer](https://huggingface.co/docs/transformers/main/en/main_classes/tokenizer#transformers.PreTrainedTokenizer).
         transformer ([`CogView4Transformer2DModel`]):
             A text conditioned `CogView4Transformer2DModel` to denoise the encoded image latents.
         scheduler ([`SchedulerMixin`]):
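For context, the docstring above documents the components that `CogView4Pipeline` is constructed from. A minimal usage sketch follows, assuming the standard diffusers pipeline API; the `THUDM/CogView4-6B` checkpoint id is an assumption and is not named in this diff.

```python
# Minimal sketch, not from this PR: load CogView4Pipeline, whose text_encoder and
# tokenizer are the GLM components described in the updated docstring.
# "THUDM/CogView4-6B" is an assumed checkpoint id, not taken from this diff.
import torch
from diffusers import CogView4Pipeline

pipe = CogView4Pipeline.from_pretrained("THUDM/CogView4-6B", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# The loaded components should match the docstring: a GLM text encoder and a
# generic PreTrainedTokenizer rather than T5EncoderModel / T5Tokenizer.
print(type(pipe.text_encoder).__name__, type(pipe.tokenizer).__name__)

image = pipe(prompt="A photo of an astronaut riding a horse on Mars").images[0]
image.save("cogview4_sample.png")
```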