Multi request & bug with voice change #978
habibaezz01
started this conversation in
General
Hi, thanks for the great work on this model!
I had two questions:
Concurrent/Async Request Handling:
Is there any plan or recommended approach to support multiple concurrent inference requests efficiently? Specifically, is there any guidance on integrating something like vLLM or another async-compatible backend to allow better scalability?
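To make the question concrete, here is a minimal sketch of the kind of concurrent handling I have in mind, using Python's asyncio with a bounded semaphore. The `infer()` function is a placeholder standing in for the real model call, not this project's API:

```python
import asyncio

async def infer(text: str) -> str:
    # Placeholder for the actual model inference; simulates latency.
    await asyncio.sleep(0.01)
    return f"audio for: {text}"

async def handle_requests(texts, max_concurrency=4):
    # Cap in-flight inferences so the GPU isn't oversubscribed.
    sem = asyncio.Semaphore(max_concurrency)

    async def bounded(text):
        async with sem:
            return await infer(text)

    return await asyncio.gather(*(bounded(t) for t in texts))

results = asyncio.run(handle_requests(["hello", "world", "test"]))
print(results)
```

Something along these lines works for I/O-bound serving, but true batched GPU throughput is where a backend like vLLM (with continuous batching) would presumably help, hence the question.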
Voice Change Issue:
I've noticed that the voice output sometimes changes unexpectedly, even with the same input settings.
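In case it helps narrow this down, here is an illustration of the kind of nondeterminism I suspect: if the stochastic step that selects or perturbs the voice is not seeded per request, repeated runs with identical inputs can diverge. This uses Python's `random` purely as a stand-in for the model's sampler, not this project's actual code:

```python
import random

def sample_voice_embedding(seed=None):
    # Stand-in for whatever stochastic step picks/perturbs the voice.
    rng = random.Random(seed)
    return [rng.random() for _ in range(4)]

# Without a fixed seed, two "identical" requests can yield different voices.
a = sample_voice_embedding()
b = sample_voice_embedding()

# With a fixed seed, the result is reproducible across requests.
c = sample_voice_embedding(seed=42)
d = sample_voice_embedding(seed=42)
print(c == d)
```

Is there a recommended way to pin the seed (or the speaker embedding) per request so the voice stays stable?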