KSampler is solely responsible for converting the input latent to the output latent using the diffusion model. Anything related to transparency needs to be handled in the pixel space, not in the latent space.
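For reference, the pixel-space blending referred to here is typically the standard Porter-Duff "over" operator, which handles fractional alpha (partially transparent hair strands, etc.) rather than a hard 0/1 cutout. A minimal sketch in plain Python (the pixel layout and 0.0-1.0 normalization are assumptions for illustration, not anything KSampler exposes):

```python
# Standard "over" alpha compositing in pixel space.
# Pixels are (R, G, B, A) tuples with channels normalized to 0.0-1.0,
# so a fractional alpha (e.g. 0.4 on a hair strand) blends smoothly
# instead of cutting hard.
def composite_over(fg, bg):
    fa, ba = fg[3], bg[3]
    out_a = fa + ba * (1.0 - fa)          # resulting coverage
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)       # fully transparent result
    rgb = tuple(
        (f * fa + b * ba * (1.0 - fa)) / out_a
        for f, b in zip(fg[:3], bg[:3])
    )
    return rgb + (out_a,)

# A 40%-opaque white "hair" pixel over an opaque black background
# blends to 40% grey rather than snapping to white or black.
print(composite_over((1.0, 1.0, 1.0, 0.4), (0.0, 0.0, 0.0, 1.0)))
```

Applied per pixel (or vectorized with NumPy/Pillow), this is what lets separately generated RGBA layers combine seamlessly once you have a soft alpha matte.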
-
I know there are nodes for removing backgrounds, but they cannot cleanly remove the background in fine regions, for example between strands of hair.
It would therefore be great to have a KSampler option, something like "make transparent background", so that only the main subject is generated and everything else gets an alpha value as needed: not just 0 or 1, but fractional values for hair and other details where background removal normally fails.
That way we could truly composite outputs from different checkpoint types (SD15, SDXL, Flux, Wan, etc.) into one final result with seamless integration, and do other great things that simple background removal cannot handle well enough.
For example, we could render a video of a man in Flux with a true transparent background (KSampler with the transparent-background option), a video of a woman in Wan with a true transparent background, and composite them together with, say, a static image generated with SD15, all into one seamless, smooth output.
I hope you get what I mean...