Finetrainers v0.2.0 🧪
New trainers
[Demo video: wan-t2v-image-conditioning.mp4]
The training involves adding extra input channels to the patch embedding layer (referred to as the "control injection" layer in finetrainers) to mix conditioning features into the latent stream, as sketched below. This architectural choice is very common and has appeared in many models before - CogVideoX-I2V, HunyuanVideo-I2V, Alibaba's Fun Control models, etc. Given the popularity and simplicity of the design, it is a good fit to support as a standalone trainer.
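
A minimal sketch of the idea in PyTorch: expand the patch embedding's input channels to accept channel-wise concatenated condition latents, zero-initializing the new weights so the expanded layer initially behaves exactly like the pretrained one. The helper name, layer shapes, and latent sizes here are illustrative assumptions, not finetrainers' actual code.

```python
# Sketch of "control injection" via extra patch-embedding input channels.
# Names and shapes are hypothetical; the real trainer may differ.
import torch
import torch.nn as nn

def expand_patch_embedding(old: nn.Conv3d, extra_in_channels: int) -> nn.Conv3d:
    """Return a copy of `old` that accepts `extra_in_channels` more input channels.
    New weights are zero-initialized so outputs match the pretrained layer at init."""
    new = nn.Conv3d(
        old.in_channels + extra_in_channels,
        old.out_channels,
        kernel_size=old.kernel_size,
        stride=old.stride,
        padding=old.padding,
        bias=old.bias is not None,
    )
    with torch.no_grad():
        new.weight.zero_()
        new.weight[:, : old.in_channels] = old.weight  # keep pretrained weights
        if old.bias is not None:
            new.bias.copy_(old.bias)
    return new

# Usage: concatenate condition latents channel-wise before patchification.
latents = torch.randn(1, 16, 9, 60, 104)       # (B, C, T, H, W) video latents
cond_latents = torch.randn(1, 16, 9, 60, 104)  # e.g. VAE-encoded conditioning frames
patch_embed = nn.Conv3d(16, 3072, kernel_size=(1, 2, 2), stride=(1, 2, 2))
patch_embed = expand_patch_embedding(patch_embed, extra_in_channels=16)
hidden = patch_embed(torch.cat([latents, cond_latents], dim=1))
```

Zero-initializing the new channels means training starts from the base model's behavior and the conditioning pathway is learned gradually, which is the usual trick when grafting extra inputs onto a pretrained patchify layer.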
New models supported
Attention
Support for multiple attention providers for training and inference: PyTorch native, `flash-attn`, `sageattention`, `xformers`, and `flex`. See the docs for more details.
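
As an illustration of what switching between these providers can look like, here is a hedged sketch of a dispatcher. The `attention` helper and the provider strings are assumptions for this example, not finetrainers' actual API; each backend must be installed (and, for most of them, a CUDA GPU available) for its branch to run.

```python
# Illustrative attention-provider dispatch; not finetrainers' real interface.
import torch
import torch.nn.functional as F

def attention(q, k, v, provider: str = "native"):
    """q, k, v: (batch, heads, seq_len, head_dim) tensors."""
    if provider == "native":
        return F.scaled_dot_product_attention(q, k, v)
    if provider == "flash-attn":
        from flash_attn import flash_attn_func  # expects (B, S, H, D)
        out = flash_attn_func(q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2))
        return out.transpose(1, 2)
    if provider == "sageattention":
        from sageattention import sageattn  # "HND" layout = (B, H, S, D)
        return sageattn(q, k, v, tensor_layout="HND")
    if provider == "xformers":
        import xformers.ops as xops  # expects (B, S, H, D)
        out = xops.memory_efficient_attention(
            q.transpose(1, 2), k.transpose(1, 2), v.transpose(1, 2)
        )
        return out.transpose(1, 2)
    if provider == "flex":
        from torch.nn.attention.flex_attention import flex_attention  # PyTorch 2.5+
        return flex_attention(q, k, v)
    raise ValueError(f"Unknown attention provider: {provider}")

# CPU-safe usage with the native backend:
q = k = v = torch.randn(1, 24, 128, 64)  # (B, H, S, D)
out = attention(q, k, v, provider="native")
```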
Other major changes
What's Changed
- [Doc] Fix a typo of `flux.md` by @DarkSharpness in #363

New Contributors
- @DarkSharpness made their first contribution in #363

Full Changelog: v0.1.0...v0.2.0