Releases: Mystfit/Unreal-StableDiffusionTools
v0.7.2 - Image overlays and live updates
- Added live preview mode to automatically generate images when the editor viewport or a scene capture actor is moved.
- Added depth preview overlay to visualize the depth map for depth models.
- Added progress bar to the bottom of the generated image for image generation and upscaling.
- Made image generation cancellable.
- Added debug flag to display input/output PIL images from within the Diffusers bridge.
- Added new background material to represent an empty canvas.
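Cancellable generation is typically wired through a flag checked between sampler steps. A minimal pure-Python sketch of that pattern (the function and exception names here are illustrative, not the plugin's actual implementation):

```python
import threading

class GenerationCancelled(Exception):
    """Raised when the user cancels an in-progress generation."""

def run_generation(num_steps, cancel_event, on_step=None):
    """Illustrative diffusion-style step loop that honours a cancel flag.

    cancel_event is checked between steps, which is the usual way
    UI-driven cancellation is implemented for iterative samplers.
    """
    completed = 0
    for step in range(num_steps):
        if cancel_event.is_set():
            raise GenerationCancelled(f"cancelled at step {step}")
        # ... one denoising step would run here ...
        completed += 1
        if on_step:
            on_step(step, num_steps)
    return completed

# Usage: a (simulated) UI thread presses Cancel during step 2.
cancel = threading.Event()
steps_seen = []

def ui_progress(step, total):
    steps_seen.append(step)
    if step == 2:  # user pressed Cancel
        cancel.set()

try:
    run_generation(10, cancel, ui_progress)
except GenerationCancelled:
    pass  # partial result discarded, UI unblocked
```

Checking the flag only between steps keeps the sampler loop simple: a step already in flight finishes, then the loop bails out cleanly.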
Full Changelog: v0.7.1...v0.7.2
v0.7.1 - Input sources
Added input sources to the plugin panel.
The Viewport option will generate a new image from whatever is being displayed in the active viewport (including the toolbar, so you may want to hide it first). This was the original capture method for the plugin before it was replaced by the Scene Capture Actor method.
The Scene Capture Actor option lets you pick any Scene Capture 2D actor in your level, giving you more granular control over how your scene is captured before being passed to the image generator.
To use Lumen in your scene capture, make sure to enable the Global Illumination method property in your Scene Capture Actor's post-processing settings (See image below for reference).
Important: You currently need to use the Scene Capture Actor option for inpaint/depth models, as it is how the plugin modifies the post-processing stack to capture depth maps or stencil passes.
v0.7.0 - InstructPix2Pix support
- Added InstructPix2Pix model. Use descriptive language to modify your captured scene. For example, the image above used the prompt "what would it look like if it was overgrown with foliage".
- When saving textures, an associated data asset (prefixed with DA_) is created that saves input parameters that were used to generate the original image.
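In diffusers, InstructPix2Pix is exposed as `StableDiffusionInstructPix2PixPipeline` (checkpoint `timbrooks/instruct-pix2pix`). The record-keeping half of this release can be sketched as follows; the field names and helper are hypothetical, not the plugin's actual data-asset schema, and only the `DA_` prefix comes from the release note:

```python
from dataclasses import dataclass, asdict

@dataclass
class GenerationParams:
    """Hypothetical snapshot of the inputs used to generate an image,
    mirroring the kind of data a DA_-prefixed data asset might store."""
    prompt: str
    model: str
    steps: int = 20
    guidance_scale: float = 7.5
    image_guidance_scale: float = 1.5  # InstructPix2Pix-specific knob
    seed: int = 0

def data_asset_name(texture_name: str) -> str:
    """Derive the companion data asset name from a saved texture name
    (illustrative naming helper)."""
    return f"DA_{texture_name}"

params = GenerationParams(
    prompt="what would it look like if it was overgrown with foliage",
    model="timbrooks/instruct-pix2pix",
    seed=42,
)
record = asdict(params)  # serializable dict, ready to persist
```

Storing the full parameter set alongside the texture makes a generation reproducible later: re-running with the same prompt, model, and seed should yield the same image.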
Full Changelog: v0.6.0...v0.7.0
v0.6.0 - Depth model support
Added support for the StabilityAI SD depth model using the scene depth as a guide for performing depth2img image generation (see the model card page for more information about this technique).
Any model loaded with the Depth parameter set to true will expose the depth panel in the plugin UI. Use the scene depth scale to set how much of the scene will be visible in the captured depth map. Use the Diffusers_StabilityAI2-1_depth model as a starting point.
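The scene depth scale effectively acts as a far-plane cutoff when the captured depth buffer is normalized for the model. A minimal sketch of that idea (the normalization convention here is an assumption, not the plugin's exact code):

```python
def normalize_depth(depth_values, scene_depth_scale):
    """Map raw scene depths (e.g. in Unreal units) into [0, 1].

    Anything at or beyond scene_depth_scale clamps to 1.0, so the
    scale controls how much of the scene survives in the depth map
    handed to a depth2img model.
    """
    if scene_depth_scale <= 0:
        raise ValueError("scene_depth_scale must be positive")
    return [min(d / scene_depth_scale, 1.0) for d in depth_values]

# Usage: with a scale of 1000, geometry past 1000 units is flattened
# to the far plane and contributes no depth detail.
depth_map = normalize_depth([0.0, 250.0, 1000.0, 5000.0], 1000.0)
```

A smaller scale concentrates the depth range on nearby geometry; a larger one keeps distant objects distinguishable at the cost of near-field precision.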
Full Changelog: v0.5.6...v0.6.0
v0.5.6 - Fixed diffusers bridge and added SD 2.1 presets
v0.5.5 - Fixed long prompt weighting
Models no longer worked with the LPW pipeline due to diffusers being too old, so we've reverted back to the GitHub version.
Also removed the deprecated init_image kwarg.
v0.5.4
What's Changed
- Add missing categories to the DependencyManager by @mateuszwojt in #20
- Sequencer fixes to avoid a null bridge class being used
New Contributors
- @mateuszwojt made their first contribution in #20
Full Changelog: v0.5.3...v0.5.4
v0.5.3 - UE51 Import order
Updated the import order of plugin headers for Unreal Engine 5.1.
v0.5.2 - Schedulers and updated dependency installer
Added the ability to change schedulers to a user-specified one.
Added latest diffusers library which has added support for Stable Diffusion 2.0.
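A user-specified scheduler is usually resolved by name before being swapped into the pipeline. A hypothetical lookup sketch (the diffusers class names below are real, but the mapping and helper are illustrative, not the plugin's code):

```python
# Map user-facing names to diffusers scheduler class names. With
# diffusers installed, the chosen class would be swapped in via e.g.
#   pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
SCHEDULERS = {
    "ddim": "DDIMScheduler",
    "euler": "EulerDiscreteScheduler",
    "euler_a": "EulerAncestralDiscreteScheduler",
    "dpm++": "DPMSolverMultistepScheduler",
}

def resolve_scheduler(name: str) -> str:
    """Return the diffusers scheduler class name for a user choice."""
    try:
        return SCHEDULERS[name.lower()]
    except KeyError:
        valid = ", ".join(sorted(SCHEDULERS))
        raise ValueError(f"unknown scheduler '{name}'; expected one of: {valid}")
```

Rebuilding the scheduler from the existing one's config keeps model-specific settings (such as the beta schedule) intact while changing only the sampling algorithm.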
Assets have been upgraded to 5.1, so moving forwards I'll be focusing on maintaining the 5.1 version, as it's harder to downgrade assets to 5.0. If there is high demand for 5.0, I'll look into downgrading the assets.
v0.5.1 - Unreal 5.1 support
The plugin now supports the latest release version of Unreal Engine. Moving forwards, releases will include both 5.0 and 5.1 binaries due to assets saved in 5.1 not being backwards compatible with 5.0.