**Custom nodes from [ComfyUI-VideoHelperSuite](https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite) are required for these nodes to function properly.**
**If you installed via ComfyUI-Manager, all the necessary models should be downloaded automatically into the `models/diffusers` folder.**
Manually download the [CoTracker checkpoint](https://huggingface.co/facebook/cotracker/blob/main/cotracker2.pth) and place it in the `models/cotracker` folder to use AniDoc with tracking enabled.
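The manual download above can be scripted; this is a sketch, not part of the repository, and it assumes you run it from the ComfyUI root and that the checkpoint is served at the usual Hugging Face `resolve` URL:

```shell
# Create the expected folder and fetch the checkpoint (assumed layout:
# run from the ComfyUI root; folder name per the instruction above).
mkdir -p models/cotracker
wget -O models/cotracker/cotracker2.pth \
  https://huggingface.co/facebook/cotracker/resolve/main/cotracker2.pth
```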
The nodes can be found in the "AniDoc" category as `AniDocLoader`, `LoadCoTracker`, `GetAniDocControlnetImages`, and `AniDocSampler`.
Take a look at the example workflow for more info.
> Currently our model expects a `14`-frame video as input, so if you want to colorize your own lineart sequence, you should preprocess it into chunks of 14 frames.
> However, in our tests we found that in most cases the model also works well for longer inputs (e.g. `72 frames`).
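The preprocessing step above can be sketched as follows. This is a hypothetical helper, not part of the repository: it splits a frame sequence into fixed-size chunks and pads the last chunk by repeating its final frame so every chunk matches the expected length.

```python
def chunk_frames(frames, chunk_size=14):
    """Split a frame sequence into chunks of `chunk_size`,
    padding the last chunk by repeating its final frame."""
    chunks = []
    for start in range(0, len(frames), chunk_size):
        chunk = list(frames[start:start + chunk_size])
        # Pad a short trailing chunk with copies of its last frame.
        chunk += [chunk[-1]] * (chunk_size - len(chunk))
        chunks.append(chunk)
    return chunks

# Example: 30 frames become three 14-frame chunks, the last one padded.
chunks = chunk_frames(list(range(30)))
```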
## Showcases
*Some demos from **[the official demo page](https://yihao-meng.github.io/AniDoc_demo)**.*