
🎨 Remove new_token_ids from warmup #292


Draft · wants to merge 4 commits into main

Conversation

@prashantgupta24 (Collaborator) commented on Jul 9, 2025:

Description

🎨 Remove new_token_ids from warmup since new_token_ids are not used anymore.


github-actions bot commented Jul 9, 2025

👋 Hi! Thank you for contributing to vLLM support on Spyre.
Just a reminder: make sure your code passes all the linting checks, otherwise your PR can't be merged. To do so, first install the linting requirements, then run format.sh and commit the changes. This can be done with uv directly:

uv sync --frozen --group lint --active --inexact

Or this can be done with pip:

uv pip compile --group lint > requirements-lint.txt
pip install -r requirements-lint.txt
bash format.sh

Now you are good to go 🚀

@prashantgupta24 changed the title from "🔥 remove new_token_ids from warmup decode" to "[WIP] Fix the compiler issue with the new changes" on Jul 9, 2025
@prashantgupta24 (Collaborator, Author) commented:

bot:test
MARKERS="cb and spyre"
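For reference, the MARKERS value above is a standard pytest marker expression. Assuming the test bot forwards it to pytest's -m option (an assumption, not something stated in this thread), an equivalent local run would look roughly like:

python -m pytest -m "cb and spyre" tests/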

@prashantgupta24 changed the title from "[WIP] Fix the compiler issue with the new changes" to "🎨 Remove new_token_ids from warmup" on Jul 9, 2025
Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>
    num_computed_tokens.append(prompt_len)
cached_request_data = CachedRequestData(
    req_ids=req_ids,
    resumed_from_preemption=False,
-   new_token_ids=new_token_ids,
+   new_token_ids=[[] for _ in range(len(dummy_requests))],
A collaborator commented on this change:

does this need to be set to anything? I'm actually not clear on whether or not the prefill pass returns a first sampled token which may be cached here

@prashantgupta24 (Collaborator, Author) replied on Jul 9, 2025:

The prefill pass does return a sampled token, but that caching happens within execute_model, so we don't need to pass in new_token_ids anymore.
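
For context, here is a minimal sketch of how the warmup decode step might assemble its dummy CachedRequestData after this change. The field names follow the diff above; dummy_requests, prompt_len, req.req_id and new_block_ids are illustrative placeholders, not the actual vllm-spyre code.

# Illustrative sketch only: field names follow the diff above;
# dummy_requests, prompt_len and new_block_ids are placeholder assumptions.
req_ids = []
num_computed_tokens = []
for req in dummy_requests:
    req_ids.append(req.req_id)
    # each dummy warmup request has already processed its full prompt
    num_computed_tokens.append(prompt_len)

cached_request_data = CachedRequestData(
    req_ids=req_ids,
    resumed_from_preemption=False,
    # Prefill does sample a first token, but execute_model caches it
    # internally, so warmup only passes empty per-request lists here.
    new_token_ids=[[] for _ in range(len(dummy_requests))],
    new_block_ids=[[] for _ in range(len(dummy_requests))],  # placeholder
    num_computed_tokens=num_computed_tokens,
)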

Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>
Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>
Signed-off-by: Prashant Gupta <prashantgupta@us.ibm.com>
@joerunde (Collaborator) commented:

@prashantgupta24 do we wanna get this merged?

If so, we should definitely test to triple-check that this works with the upcoming compiler changes for continuous batching.

@prashantgupta24 (Collaborator, Author) replied:

No hurry as such, don't want to add anything before the release lol
