ensure dtype match between diffused latents and vae weights #8391

Conversation
Thanks for your PR. Does it only occur when using the Sigma pipeline? Would something like this be more prudent to implement? diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py Line 1264 in 6be43bd
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
This also occurs under SD 1.x/2.x and SDXL. Here is an error seen when using the SDXL Refiner:
#7886 is the same/similar issue.
I don't know much about the background of the force_upcast config param. I do know I have had this issue in the PixArt pipelines (maybe alpha too?) a few times. This fix seems simple and I don't see any downside.
Will defer to @yiyixuxu for an opinion on how best to proceed. IMO, we should handle it in the same way as diffusers/src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_diffusion_xl.py Line 1264 in 6be43bd
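For context, the referenced SDXL block handles the fp16-overflow case by upcasting the VAE and then matching the latents to the upcast VAE's parameter dtype. Roughly (paraphrased from the pipeline's decode step; may not be verbatim from commit 6be43bd):

```python
# Inside StableDiffusionXLPipeline.__call__, just before decoding (paraphrased).
needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
if needs_upcasting:
    # the SDXL VAE overflows in fp16, so it is upcast to fp32 for decoding
    self.upcast_vae()
    latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
```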
Do you mean to provide a conditional check instead of unconditionally casting to the VAE's dtype? Or do you mean we should set force_upcast in a certain situation?

For the former, I'm curious what problems you foresee with doing it unconditionally. It's not that having a check would hurt, but I also don't see any harm in ensuring the latents match the VAE dtype before decode.

For the latter, this is a situation where upcasting the VAE to match the latents is unnecessary, e.g. I am using the fp16-fixed SDXL VAE for decode, and upcasting would just waste resources. The problem is that the latents become fp32 after being modified by the pipeline just a few lines prior to the decode, but the VAE itself is bf16.

tl;dr I think casting the latents to the VAE dtype is the correct solution, rather than upcasting the VAE to the latents' dtype.
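A minimal sketch of the approach argued for above (illustrative; the actual PR diff may differ): cast the latents to the VAE's dtype right before decoding, instead of upcasting the VAE to the latents' dtype.

```python
# Illustrative only; not necessarily the exact change in this PR.
# Ensure the latents match the VAE weights before decode, so a bf16 VAE can
# decode latents that were promoted to fp32 earlier in the pipeline.
latents = latents.to(dtype=self.vae.dtype)
image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
```

With an fp16-fixed or bf16 VAE, this avoids an unnecessary fp32 upcast.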
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I thought / hoped maybe it'd been fixed, but when trying to use the upstream vanilla diffusers pipelines for VAE decode during training, it's still hitting this issue (even with Accelerate).
@bghira Can you share a minimal reproduction?
Nope, I am not sure what causes the dtype switch. I think it is HF Accelerate, but the latents are fp32 and the VAE is bf16.
Sorry for the delay! I somehow missed this PR. The fix is simple and should not cause any problems, and this is different from the situation where we need to upcast the VAE to fp32. However, I would like to know more about when the latents get upcast to fp32 with accelerate. @heyalexchoi or @bghira, if you could provide a minimal code example in a follow-up PR, that'd be great.
What does this PR do?
A simple fix for the diffused latent dtype not matching the VAE weights dtype. See the error below. I had this issue when loading the pipeline in bfloat16 and using accelerate.
Fixes # (issue)
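A rough, hypothetical illustration of the scenario described above (the model id, latent shape, and decode call are assumptions, not taken from this PR, and the original error log is not reproduced here):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Pipeline (and therefore the VAE) loaded in bfloat16.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.bfloat16
).to("cuda")

# Something upstream (e.g. training code wrapped with Accelerate) can hand back
# fp32 latents even though the pipeline was loaded in bf16.
latents = torch.randn(1, 4, 128, 128, dtype=torch.float32, device="cuda")

# Decoding fp32 latents with a bf16 VAE raises a dtype-mismatch error;
# casting the latents to the VAE dtype first avoids it.
latents = latents.to(pipe.vae.dtype)
image = pipe.vae.decode(latents / pipe.vae.config.scaling_factor, return_dict=False)[0]
```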
Before submitting
Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.