SDXL Prompt Example #87
Replies: 4 comments 1 reply
-
So after doing a bunch of trial-and-error testing, it looks like either AnimateDiff or the model doesn't like running at 1024: if I run at 1024 I get a lot of out-of-memory errors, the backgrounds don't load, and when they do they don't look good. If I drop the resolution to 512 the animations work better, backgrounds show up at even 20 steps, and I have been able to create very long animations. However, the images look really bad, which is expected since SDXL models are all designed for 1024x1024.
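To put rough numbers on why 1024 hits memory limits: SDXL's VAE downsamples by a factor of 8 in each dimension, so doubling the resolution quadruples the latent area the model has to process on every frame. A quick sketch of that arithmetic (the memory-scales-with-latent-area framing is my simplification; attention layers can scale worse than linearly):

```python
# SDXL's VAE downsamples by 8x per side, so a 1024x1024 image
# becomes a 128x128 latent, and a 512x512 image a 64x64 latent.
def latent_pixels(side):
    return (side // 8) ** 2

ratio = latent_pixels(1024) / latent_pixels(512)
print(ratio)  # -> 4.0, i.e. 4x the latent area per frame at 1024
```

Multiply that by 16 or 32 frames held in memory at once and the out-of-memory errors at 1024 make sense.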
-
So after messing with this for several days, then switching back to 1.5, I cannot recommend using SDXL in its current state. It takes FOREVER to generate an animation (2 hours vs. 20 minutes), and the quality of both the animations and the images is far superior on 1.5.
-
Hey, sorry I forgot to reply to this; the Issues tab is where I usually get notifications, so it slipped my mind! I haven't had time to make an example for the readme yet, but here's a guide by Inner Reflections AI, who experimented with using HotshotXL with the AnimateDiff-Evolved repo: https://civitai.com/articles/2601
-
No problem, I was more just sharing my findings. I'll take a look at that article, see if I can get a working flow up and running, and report back! 😄
-
Spent a bit of time trying to get this to work with my SDXL pipeline. Still working out some of the kinks, but it's working! In addition to the standard items needed, I am also using SeargeSDXL and Comfyroll, but these can easily be replaced with standard components. The workflow goes through both a base and a refiner phase.
The upscale takes FOREVER, so it's probably best to bypass it if you are just testing things, or cancel if you don't like what you see in the preview. The preview image is really nice for that because it lets you see all the frames, so you can quickly tell if one of the images is really messed up or isn't showing what you want, without having to wait for the upscaler.
With 60 steps (48 base / 12 refiner) it takes me about 2 hours to process the 32 frames, and another 20 minutes to upscale. I found that keeping the steps between 40 and 60 gives better background detail, but takes a LOT longer. The animation doesn't change much if you stick with the same seed, so I've been starting with 16 frames and 20 steps, then bumping up to 32 frames once I find what I want, and finally going to 60 steps and turning on the upscaler when I have something I like.
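For rough planning of that staged workflow, generation time scales roughly with frames × steps. A small sketch calibrated to the run above (32 frames at 60 steps ≈ 120 minutes); the linear-scaling assumption is mine, not measured:

```python
# Rough time estimate assuming cost is linear in frames * steps.
# Calibrated from one reported run: 32 frames x 60 steps ~= 120 minutes.
MINUTES_PER_FRAME_STEP = 120 / (32 * 60)

def estimated_minutes(frames, steps):
    return frames * steps * MINUTES_PER_FRAME_STEP

print(estimated_minutes(16, 20))  # quick preview pass: about 20 minutes
print(estimated_minutes(32, 60))  # full-quality pass: about 120 minutes
```

So the 16-frame / 20-step preview pass costs roughly a sixth of the full run, which is why iterating at low settings first pays off.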
Running with 24 GB of VRAM I am able to do 32 frames; if I bump it up to 64 I get an out-of-memory error. I never had that issue before, so I'm not sure if it's something with AnimateDiff or a change in ComfyUI... It's also possible I never tried to batch at 64 before 🤣. I haven't tried the numbers in the middle; there is probably a sweet spot (48?).
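If anyone wants to find that sweet spot without guessing, a binary search over frame counts narrows it down in a handful of test renders. A hedged sketch; `try_render` here is a hypothetical stand-in for queueing a short low-step run and catching the out-of-memory error:

```python
def max_frames(try_render, lo=16, hi=64):
    """Binary search for the largest frame count that renders without OOM.

    try_render(n) should return True on success, False on out-of-memory.
    """
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if try_render(mid):
            best = mid       # mid works; look for something larger
            lo = mid + 1
        else:
            hi = mid - 1     # mid fails; look smaller
    return best

# Hypothetical check: pretend anything above 48 frames runs out of memory.
print(max_frames(lambda n: n <= 48))  # -> 48, found in ~6 trial renders
```

Six short test renders beats manually trying every count between 32 and 64.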
Still trying to figure out all the settings, so if anyone has any advice to get this working better, feel free to share.
This is an example of 16 frames at 60 steps. She is supposed to be jumping over a river. I'm still trying to home in on a good prompt; they don't seem to work as well (yet) with the SDXL model as with the older ones. The 32-frame one is too big to upload here. :'(