Replies: 3 comments 1 reply
-
I also have the same question. Here's some info I found: https://www.youtube.com/watch?v=zucLSuCvqM8
-
I tested it on my own. It looks like G is good at details/concepts and L is good at image content. L handles prompts like "Character with long green hair on white background". G is bad at character traits (age, clothes, etc.); L is good at these. G is better than L at complicated conceptual prompts. You can add characters with L and then change what they are doing with G.
-
In practice, many users are manually copying and pasting the same prompt into both parameters. Computers exist to automate. Let's add an option to automatically sync these fields.
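A minimal sketch of what such a sync option could look like. This is a hypothetical helper, not part of ComfyUI's actual API; only the input names `text_g` and `text_l` come from the node itself.

```python
def build_sdxl_text_inputs(prompt: str, sync: bool = True,
                           text_g: str = "", text_l: str = ""):
    """Hypothetical helper: with sync on, mirror one prompt into both
    CLIP inputs of CLIPTextEncodeSDXL; otherwise pass the separately
    authored fields through, falling back to the shared prompt."""
    if sync:
        return {"text_g": prompt, "text_l": prompt}
    return {"text_g": text_g or prompt, "text_l": text_l or prompt}
```

With sync enabled, a single prompt box would drive both encoder inputs; disabling it restores the current two-field behavior.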
-
I notice that there are two inputs, `text_g` and `text_l`, to `CLIPTextEncodeSDXL`. I skimmed through the SDXL technical report and I think these correspond to OpenCLIP ViT-bigG and CLIP ViT-L. I don't know why Stability wants two CLIPs, but I think the input to the two CLIPs can be the same. So, @comfyanonymous, perhaps you can tell us the motivation for allowing the two CLIPs to have different inputs? Did you find any interesting usage?
https://github.com/comfyanonymous/ComfyUI/blame/4a77fcd6ab01d69e18c384faa29ae1c3d02237f3/comfy_extras/nodes_clip_sdxl.py#L41C5-L41C5
Thanks!
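For context, the SDXL report describes concatenating the per-token embeddings from the two text encoders along the channel axis before conditioning the UNet. A toy sketch of that combination, using random stand-ins for the real CLIP activations (the dimensions 768 for ViT-L and 1280 for ViT-bigG are from the report; everything else here is illustrative):

```python
import numpy as np

tokens = 77  # standard CLIP context length

# Random stand-ins for the penultimate-layer outputs of each encoder;
# real SDXL would get these from CLIP ViT-L and OpenCLIP ViT-bigG.
emb_l = np.random.randn(tokens, 768)    # CLIP ViT-L features
emb_g = np.random.randn(tokens, 1280)   # OpenCLIP ViT-bigG features

# Concatenate along the channel axis: 768 + 1280 = 2048 per token.
context = np.concatenate([emb_l, emb_g], axis=-1)
print(context.shape)  # (77, 2048)
```

Since the two encoders produce separate embeddings that are only joined at this concatenation step, feeding them different prompts is mechanically possible, which may be why the node exposes both inputs.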