Releases: Mystfit/Unreal-StableDiffusionTools
v0.5.0 - Generator backends (hotfix1)
Added Dream Studio and Stable Horde support via new Generator bridge scripts.
To pick a generator, open the plugin's new settings panel at Project Settings->Stable Diffusion Tools and enter the token for your generator under Generator Tokens.
NOTE: The token section of the main plugin UI has been deprecated; you MUST now enter your token in the project settings panel.
v0.4.2 - Bugfixes
Full Changelog: v0.4.1...v0.4.2
v0.4.1-hotfix1
Fixed the token field throwing an error when trying to set the token for the first time.
v0.4.1 - Dependency manager
Added a dependency manager to the UI to update all/individual packages and make it easier to see logs for packages during the install process.
v0.4.0 - Inpainting support
Inpainting
Inpainting lets you fill only a masked portion of your input image whilst keeping areas outside the mask consistent. To use it, load the Runway1-5_Inpaint model preset, or choose any other model and make sure that Enable inpainting is set in the model options. To create a mask, add actors to an actor layer, then set the Inpaint layer field in the UI to the actor layer you want to use as the inpainting mask. If you are using the sequencer, set the inpaint layer in the properties of an options track, and for each actor on the layer enable Render CustomDepth Pass and set CustomDepth Stencil Value to 1.
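Conceptually, the mask acts as a per-pixel switch: pixels inside the mask are replaced by the generator's output, while pixels outside it are kept from the input image. A minimal pure-Python sketch of that compositing step (the names here are illustrative, not the plugin's actual API):

```python
def composite_inpaint(original, generated, mask):
    """Keep original pixels where mask is 0; take generated pixels where mask is 1."""
    return [
        [gen if m else orig for orig, gen, m in zip(row_o, row_g, row_m)]
        for row_o, row_g, row_m in zip(original, generated, mask)
    ]

original  = [[10, 10], [10, 10]]   # input image (e.g. the viewport capture)
generated = [[99, 99], [99, 99]]   # model output
mask      = [[0, 1], [1, 0]]       # 1 = inpaint this pixel
print(composite_inpaint(original, generated, mask))  # [[10, 99], [99, 10]]
```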
IMPORTANT: When generating images from the UI panel, the input image will appear too dark. This is because the plugin now uses a SceneCapture2D actor to capture the viewport, which enables inpainting stencil support.
To fix this, change the following options on a global post-processing volume:
- Metering mode = Manual
- Exposure Compensation = 15
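The same change can also be scripted through Unreal's editor Python API. A sketch under the assumption that a PostProcessVolume already exists in the level (untested; property names follow the stock unreal.PostProcessSettings API):

```python
import unreal

# Find post-process volumes in the current level and apply the exposure fix.
for actor in unreal.EditorLevelLibrary.get_all_level_actors():
    if isinstance(actor, unreal.PostProcessVolume):
        settings = actor.get_editor_property("settings")
        # Metering mode = Manual
        settings.set_editor_property("override_auto_exposure_method", True)
        settings.set_editor_property("auto_exposure_method", unreal.AutoExposureMethod.AEM_MANUAL)
        # Exposure Compensation = 15
        settings.set_editor_property("override_auto_exposure_bias", True)
        settings.set_editor_property("auto_exposure_bias", 15.0)
        actor.set_editor_property("settings", settings)
        actor.set_editor_property("unbound", True)  # make the volume global
```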
Model Assets
You can now create StableDiffusionModelAsset data assets to hold frequently used model settings. You can load the data assets into either the UI or an SD sequencer options section.
Discord
I've created a Discord for the plugin. Come by and say hi.
v0.3.5 - Added xformers memory efficient attention support
The pipeline now has enable_xformers_memory_efficient_attention enabled, which reduces VRAM usage and inference time, speeding up image renders.
Don't forget to update your dependencies/restart Unreal to pull in the latest xformers and diffusers libraries.
v0.3.4 - NSFW filter and tab launch bugfix
NSFW filter can now be disabled in case you are receiving too many false positives when generating animations. Please use responsibly.
Fixed the plugin tab not loading correctly if the tab was visible when closing and relaunching the editor. The culprit was the plugin's init_unreal.py file loading after the widget had already been constructed.
v0.3.3 - Sequencer upscale support
When exporting an animation using the Stable Diffusion movie pipeline, 4x upscaling is now enabled by default. An output resolution of 512x512 will now produce an upscaled 2048x2048 animation.
v0.3.2 - Sequencer bugfixes
Fixed broken plugin compilation when compiling as an engine plugin
Fixed sequencer prompt serialization and incorrect curve evaluation when using custom output framerates
v0.3.1 - Weighted prompts, upsampler support, tiling textures
Weighted subprompts
- You can now create prompt rows in the plugin window and assign a weight to each row that increases the strength of that prompt in the final image output.
- Weight values can now be animated for each prompt section in the sequencer.
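Weighted subprompts are commonly combined by blending each prompt's conditioning vector by its normalized weight. A toy pure-Python sketch of that idea (illustrative only; this is not the plugin's actual blending code):

```python
def blend_weighted_prompts(embeddings, weights):
    """Blend per-prompt embedding vectors by their normalized weights."""
    total = sum(weights)
    norm = [w / total for w in weights]
    dim = len(embeddings[0])
    return [sum(w * emb[i] for emb, w in zip(embeddings, norm)) for i in range(dim)]

# Two toy 3-dimensional "embeddings" weighted 3:1
blended = blend_weighted_prompts([[1.0, 0.0, 2.0], [0.0, 4.0, 2.0]], [3.0, 1.0])
print(blended)  # [0.75, 1.0, 2.0]
```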
Upsampler support
- Added support for 4x Real-ESRGAN upsampling. An upsampling section has been added to the Image Outputs section in the plugin editor panel.
Tiling textures
- By setting the convolution padding parameter in the model options to circular, the generator will attempt to make tileable images.
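Circular padding wraps each edge of the image around to the opposite side, so convolutions see the texture as if it already tiled, which is what makes the output seamless (PyTorch exposes the same idea via Conv2d's padding_mode='circular'). A minimal pure-Python illustration of one-pixel circular padding on a 2D grid:

```python
def circular_pad(grid, pad=1):
    """Pad a 2D grid by wrapping rows and columns around (torus topology)."""
    h = len(grid)
    rows = [grid[(r - pad) % h] for r in range(h + 2 * pad)]
    w = len(grid[0])
    return [[row[(c - pad) % w] for c in range(w + 2 * pad)] for row in rows]

grid = [[1, 2],
        [3, 4]]
print(circular_pad(grid))
# [[4, 3, 4, 3],
#  [2, 1, 2, 1],
#  [4, 3, 4, 3],
#  [2, 1, 2, 1]]
```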
Engine plugin support
- Fixed missing categories on UPROPERTIES and UFUNCTIONS that were hindering compiling the plugin as an engine plugin.