@Greatz08 How are you using it? Comfy, or the example script/Gradio? I responded here too.
I was curious to test awq-int4-flux.1-t5xxl.safetensors to save more VRAM, since I only have an RTX 4060 with 8 GB of VRAM, but it caused a CUDA out-of-memory error, while t5xxl_fp8_e4m3fn_scaled.safetensors does work on my system. I want to understand the reason behind this issue, so if anyone has an idea why it happened, please let me know. My guess is that the AWQ quant might be the issue, but I can't confirm it, so I'm asking here :-)
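One plausible explanation worth ruling out: if the loader dequantizes the AWQ int4 weights to fp16 at load time (rather than keeping them packed on the GPU), the peak footprint can briefly approach the full fp16 size, which would not fit in 8 GB even though the packed int4 file is much smaller than the fp8 one. A back-of-the-envelope sketch, assuming roughly 4.7B parameters for the T5-XXL encoder (an assumption; exact counts vary by checkpoint) and counting weights only:

```python
# Rough VRAM estimate for a T5-XXL text encoder's weights.
# PARAMS is an assumption (~4.7e9); activations and framework overhead
# are ignored, so real usage is higher.
PARAMS = 4.7e9

def vram_gib(bytes_per_param: float) -> float:
    """Weight footprint in GiB at a given storage width."""
    return PARAMS * bytes_per_param / 2**30

fp16 = vram_gib(2.0)        # full half precision
fp8_scaled = vram_gib(1.0)  # fp8 weights (plus small per-tensor scales)
int4_packed = vram_gib(0.5) # AWQ int4, if it stays packed on the GPU

print(f"fp16:        {fp16:.1f} GiB")
print(f"fp8 scaled:  {fp8_scaled:.1f} GiB")
print(f"int4 packed: {int4_packed:.1f} GiB")
# If a loader unpacks int4 -> fp16 before (or while) moving weights to the
# GPU, the peak approaches the fp16 figure and an 8 GiB card can OOM,
# while the fp8 checkpoint is loaded as-is and fits.
```

So the file on disk being smaller does not guarantee a smaller peak during loading; it depends on whether the runtime has a kernel that consumes the packed int4 format directly.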