[Feature Request] Testing with defined prefix lengths #104

Open
thameem-abbas opened this issue Apr 4, 2025 · 0 comments

Comments

@thameem-abbas
Contributor

With the vLLM v1 engine enabling prefix caching by default, we need a way to test this consistently from the client side. vLLM's built-in benchmarks already support this, so they can serve as a sample.

As we know, the throughput improvements are considerable with a good cache hit rate.

Linking to vLLM v1 blog: https://blog.vllm.ai/2025/01/27/v1-alpha-release.html#:~:text=3.%20Zero%2DOverhead%20Prefix%20Caching
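A minimal sketch of what such a client-side workload generator could look like. This is not the vLLM benchmark implementation; the function name, word-based lengths, and placeholder vocabulary are all assumptions for illustration. A real harness would use the model's tokenizer so that `prefix_len` is an exact token count rather than a word count:

```python
import random

def make_prefixed_prompts(num_prompts, prefix_len, suffix_len, seed=0):
    """Build prompts sharing a common prefix of prefix_len words, each
    followed by a unique random suffix of suffix_len words.

    Hypothetical helper: word counts only approximate token counts;
    swap in the model's tokenizer for exact prefix lengths.
    """
    rng = random.Random(seed)
    vocab = [f"word{i}" for i in range(1000)]  # placeholder vocabulary
    # Shared prefix: with prefix caching enabled, the server should be
    # able to reuse the cached KV blocks for this portion of every request.
    prefix = " ".join(rng.choice(vocab) for _ in range(prefix_len))
    prompts = []
    for _ in range(num_prompts):
        # Unique suffix per request so only the prefix is cacheable.
        suffix = " ".join(rng.choice(vocab) for _ in range(suffix_len))
        prompts.append(f"{prefix} {suffix}")
    return prompts

prompts = make_prefixed_prompts(num_prompts=4, prefix_len=8, suffix_len=4)
# Every prompt starts with the same 8-word prefix.
assert all(p.split()[:8] == prompts[0].split()[:8] for p in prompts)
```

Sweeping `prefix_len` against `suffix_len` (e.g. 0%, 50%, 90% shared-prefix ratios) would then let the benchmark report throughput as a function of the achievable hit rate.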
