This repository was archived by the owner on Jan 28, 2025. It is now read-only.
Replies: 2 comments
-
Cool, I'll have a look into removing this and experimenting with batch size. I'm looking for performance increases now, so this is perfect timing :)
-
I don't notice a difference in performance between different batch sizes, and I prefer the per-frame updating of wav2lip_batch_size=1. Still, removing these lines is good as they aren't providing anything of use, so thanks for pointing it out 👍
-
Easy-Wav2Lip/inference.py, line 639 at commit 752c569
I noticed this especially when I tried to set wav2lip_batch_size > 1 to accelerate inference, since this loop caused an unexpected BGR channel-order problem 🤨
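The exact loop at line 639 isn't reproduced in this thread, but a common cause of this kind of BGR problem is an extra per-frame channel reversal applied on top of a conversion that already happened once per batch. A minimal numpy sketch (illustrative only; the frame values and swap placement here are assumptions, not the actual inference.py code):

```python
import numpy as np

# A hypothetical 2x2 "frame" that is pure red in RGB order.
frame_rgb = np.zeros((2, 2, 3), dtype=np.uint8)
frame_rgb[..., 0] = 255  # R channel

# Reversing the last axis swaps RGB <-> BGR (what a
# cv2.cvtColor BGR2RGB call amounts to for 3-channel images).
swapped = frame_rgb[..., ::-1]

# If a per-frame loop applies the swap a second time, an even
# number of swaps silently restores the original order, while an
# odd number leaves every frame with flipped colors downstream.
double_swapped = swapped[..., ::-1]
```

With batching, the dangerous case is when the number of swaps a frame receives depends on batch size: at batch size 1 the conversions happen to cancel out, but at batch size > 1 some frames get an odd number of swaps and come out with red and blue exchanged.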