working example of vllm using whisper? #12091
Closed
silvacarl2
announced in
Q&A
Replies: 3 comments
-
See: https://github.com/vllm-project/vllm/blob/main/examples/offline_inference/whisper.py
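For reference, the linked script boils down to roughly the following. This is a minimal sketch, assuming a vLLM build with Whisper support and the bundled `AudioAsset` sample clips; the model name and parameter values follow the upstream example and may need tuning for your GPU.

```python
# Minimal offline Whisper transcription with vLLM, adapted from the linked
# examples/offline_inference/whisper.py (a sketch, not a verified recipe).
from vllm import LLM, SamplingParams
from vllm.assets.audio import AudioAsset  # sample clips shipped with vLLM

# Whisper's decoder context is short, so a small max_model_len suffices.
llm = LLM(
    model="openai/whisper-large-v3",
    max_model_len=448,
    max_num_seqs=16,                  # assumed value; lower it if memory is tight
    limit_mm_per_prompt={"audio": 1}, # one audio clip per prompt
)

prompts = [
    {
        "prompt": "<|startoftranscript|>",
        "multi_modal_data": {
            "audio": AudioAsset("mary_had_lamb").audio_and_sample_rate,
        },
    }
]

# Greedy decoding is the usual choice for transcription.
outputs = llm.generate(prompts, SamplingParams(temperature=0, max_tokens=200))
print(outputs[0].outputs[0].text)
```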
-
Yes, we tried that. I don't think Whisper is ready yet on vLLM. We installed it on an A40 and it still says there is not enough memory. That shouldn't be possible.
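On the memory point above: Whisper-large-v3 weights are only a few GB in fp16, so an out-of-memory error on a 48 GB A40 more likely comes from vLLM's KV-cache and batch pre-allocation than from the model itself. A sketch of the engine arguments that bound that pre-allocation; the specific values here are guesses to adjust per setup, not a confirmed fix for this report.

```python
from vllm import LLM

# Bound vLLM's memory pre-allocation (values are illustrative assumptions):
llm = LLM(
    model="openai/whisper-large-v3",
    max_model_len=448,           # Whisper's decoder never needs more context
    max_num_seqs=8,              # fewer concurrent sequences -> smaller KV cache
    gpu_memory_utilization=0.7,  # leave headroom for other processes on the GPU
)
```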
-
Could you please provide your environment info?
-
Does anyone have a tiny working example of vLLM using Whisper?