Model Training ‐ Comparison ‐ [Decouple]


Models | Logs | Graphs | Configs


The last optimizer parameter in the comparison.

Google:

Decouple instructs the optimizer to train the U-Net and the Text Encoder at different rates.

ChatGPT:

Text Encoder transforms natural language descriptions into numerical representations, typically using word embeddings and sequence encoding.

Habr:

The U-Net iteratively transforms noise into the resulting image.
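
For context: in the optimizer literature, "decouple" usually refers to decoupled weight decay (the AdamW modification of Adam), not to separate U-Net/Text Encoder rates; whether that is what the flag controls in these runs is an assumption. A minimal sketch of the difference, with all names illustrative:

```python
# Sketch: coupled vs. decoupled weight decay in one simplified Adam-like
# step (momentum omitted). Illustrative only, not the trainer's code.
import math

def step(p, grad, v, lr=1e-3, wd=0.01, eps=1e-8, decouple=True):
    if decouple:
        # Decoupled (AdamW-style): decay shrinks the weight directly and
        # is NOT divided by the adaptive denominator below.
        p *= 1.0 - lr * wd
        g = grad
    else:
        # Coupled (classic L2): decay is folded into the gradient, so it
        # gets rescaled by sqrt(v) along with everything else.
        g = grad + wd * p
    v = 0.999 * v + 0.001 * g * g          # second-moment estimate
    p -= lr * g / (math.sqrt(v) + eps)     # adaptive update
    return p, v
```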


Compared values:

  • true - B,

  • false - D.

As with the previous parameter, it's not entirely clear where this one came from. If you look at the optimizer's documented parameters, you won't find either this one or the previous one. Strange.
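
That said, assuming these runs used D-Adaptation (the DLR graphs below point that way), at least some versions of the dadaptation package do accept decouple directly in the optimizer constructor, switching on the decoupled weight decay sketched above; the training script may simply be forwarding it. A hedged sketch:

```python
# Assumption: the runs used DAdaptAdam from the `dadaptation` package
# (suggested by the DLR graphs). In at least some versions, `decouple`
# is a constructor flag that enables decoupled weight decay.
import torch
from dadaptation import DAdaptAdam

net = torch.nn.Linear(8, 8)
opt = DAdaptAdam(net.parameters(),
                 lr=1.0,            # D-Adaptation expects lr around 1.0
                 weight_decay=0.01,
                 decouple=True)
```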


DLR(step)


Loss(epoch)


Although the DLR graph for decouple=false shows a sharp spike and its loss matches the other models, the resulting model has no effect on the output at any stage of training.


CONCLUSION

decouple=true is a must-have optimizer parameter.


Next ‐ Model Training ‐ Comparison ‐ [Epochs x Repeats]
