Does vLLM support flash attention? #425
zhaoyang-star announced in Q&A

Flash attention is an important optimization technique, but I found no flash attention implementation in the vLLM code base. So does vLLM support flash attention?

Replies: 1 comment
- vLLM uses xformers' memory_efficient_attention_forward, so it makes indirect use of FlashAttention.
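  For reference, here is a minimal sketch (not taken from the vLLM code base) of how xformers' memory-efficient attention is typically called; xformers dispatches to a FlashAttention-based kernel when the hardware, dtype, and head size allow it. The module path `xformers.ops.fmha.flash.FwOp` is an assumption about the installed xformers version and may differ in others.

  ```python
  # Minimal sketch (assumptions: CUDA GPU, fp16 tensors, an xformers build that
  # exposes xformers.ops.fmha.flash.FwOp). This is not vLLM's own code, only
  # the underlying xformers call it relies on.
  import torch
  import xformers.ops as xops

  B, M, H, K = 2, 128, 8, 64  # batch, sequence length, heads, head dim
  q = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
  k = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)
  v = torch.randn(B, M, H, K, device="cuda", dtype=torch.float16)

  # Default dispatch: xformers picks the fastest available backend, which is
  # often the FlashAttention kernel on supported GPUs.
  out = xops.memory_efficient_attention_forward(q, k, v)

  # Explicitly requesting the FlashAttention forward op (errors if this setup
  # does not support it).
  out_flash = xops.memory_efficient_attention_forward(q, k, v, op=xops.fmha.flash.FwOp)
  ```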