Prior preservation #1782
Unanswered
abc123desygn asked this question in Q&A

I notice people complaining about overfitting and settings. Is there any particular reason why the prior preservation flags aren't used with the DreamBooth training script (e.g. `--with_prior_preservation --prior_loss_weight=1.0`), and why the notebook discourages the use of regularization when training faces?

I'm referring to this article from Hugging Face: https://huggingface.co/blog/dreambooth
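For context, prior preservation adds a second loss term computed on generated class images alongside the usual instance loss. Below is a minimal sketch of how the weighted term combines with the instance term, assuming the batch concatenates instance examples with class ("prior") examples along the batch dimension, as the diffusers example script does when `--with_prior_preservation` is set; the names `model_pred`, `target`, and `dreambooth_loss` are illustrative, not the script's exact API.

```python
import torch
import torch.nn.functional as F

def dreambooth_loss(model_pred: torch.Tensor,
                    target: torch.Tensor,
                    prior_loss_weight: float = 1.0) -> torch.Tensor:
    """Instance loss plus the weighted prior preservation loss.

    Assumes the first half of the batch holds instance examples and the
    second half holds generated class ("prior") examples. Names here are
    illustrative, not the training script's exact API.
    """
    # Split predictions and targets back into instance / prior halves.
    pred_instance, pred_prior = torch.chunk(model_pred, 2, dim=0)
    target_instance, target_prior = torch.chunk(target, 2, dim=0)

    instance_loss = F.mse_loss(pred_instance.float(), target_instance.float(),
                               reduction="mean")
    prior_loss = F.mse_loss(pred_prior.float(), target_prior.float(),
                            reduction="mean")

    # --prior_loss_weight scales the regularization term (1.0 in the example).
    return instance_loss + prior_loss_weight * prior_loss
```

With `prior_loss_weight=0.0` this reduces to plain fine-tuning on the instance images, which is why skipping the flag makes overfitting more likely.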
Replies: 2 comments

- It's dirty.
- Early stopping the text encoder training is a better regularization method.
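In case it helps, here is a minimal sketch of what early stopping the text encoder could look like in a custom training loop: freeze its weights once a chosen step is reached, so only the UNet keeps updating. The `stop_step` hyperparameter and helper name are assumptions for illustration; the stock diffusers `train_dreambooth.py` trains the text encoder for the whole run when `--train_text_encoder` is passed, so this behavior is something you would add yourself (some community forks expose a similar option).

```python
import torch

def maybe_freeze_text_encoder(text_encoder: torch.nn.Module,
                              global_step: int,
                              stop_step: int) -> None:
    """Freeze the text encoder once `global_step` reaches `stop_step`.

    `stop_step` is a hypothetical hyperparameter; call this once per
    training step. After freezing, clear gradients with
    zero_grad(set_to_none=True) so the optimizer skips these parameters.
    """
    if global_step == stop_step:
        # Stop autograd from computing gradients for the text encoder
        # and switch it to inference behavior (e.g. dropout disabled).
        text_encoder.requires_grad_(False)
        text_encoder.eval()
```

Called as `maybe_freeze_text_encoder(text_encoder, step, stop_step=350)` inside the loop, this keeps the text encoder's earlier, less-overfit weights while the UNet continues training, which is the regularization effect the comment above describes.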