
fix: improve VS Code LM token usage reporting for context window updates #6115


Open · wants to merge 2 commits into main

Conversation


@roomote roomote bot commented Jul 23, 2025

This PR fixes issue #6112, where the token count and context window progress bar don't update properly when using the VS Code LM API as the API provider.

The VS Code LM provider yielded token usage information only once, at the end of the stream, which prevented the UI from updating the context window progress bar while the response streamed.

Changes:

  • Added an initial usage yield with input tokens at stream start
  • Added periodic token updates during streaming, every 500 characters (see the sketch below)
  • Included cache token fields for consistency with other providers

Fixes #6112
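
To illustrate the streaming shape described above, here is a minimal TypeScript sketch. The chunk type and the countTokens callback are assumptions for illustration, not the PR's exact code:

```typescript
// Hypothetical sketch of the usage-reporting pattern; ApiStreamChunk and
// countTokens are assumed names, not necessarily the handler's real ones.
type ApiStreamChunk =
	| { type: "text"; text: string }
	| {
			type: "usage"
			inputTokens: number
			outputTokens: number
			cacheWriteTokens?: number
			cacheReadTokens?: number
	  }

async function* streamWithUsage(
	response: AsyncIterable<string>,
	inputTokens: number,
	countTokens: (text: string) => number, // assumed tokenizer callback
): AsyncGenerator<ApiStreamChunk> {
	// Initial yield so the UI can seed the context window bar immediately.
	yield { type: "usage", inputTokens, outputTokens: 0, cacheWriteTokens: 0, cacheReadTokens: 0 }

	let accumulated = ""
	let lastReportedLength = 0

	for await (const text of response) {
		accumulated += text
		yield { type: "text", text }

		// Periodic update once 500 or more new characters have streamed.
		if (accumulated.length - lastReportedLength >= 500) {
			lastReportedLength = accumulated.length
			yield {
				type: "usage",
				inputTokens,
				outputTokens: countTokens(accumulated),
				cacheWriteTokens: 0,
				cacheReadTokens: 0,
			}
		}
	}

	// Final usage at stream end; cache fields set to 0 for provider parity.
	yield {
		type: "usage",
		inputTokens,
		outputTokens: countTokens(accumulated),
		cacheWriteTokens: 0,
		cacheReadTokens: 0,
	}
}
```

Yielding usage before any text lets the UI initialize the progress bar from the prompt's input tokens; the periodic yields then keep it moving as output accumulates.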


Important

Improves token usage reporting in VsCodeLmHandler by adding initial and periodic updates during streaming, addressing issue #6112.

  • Behavior:
    • VsCodeLmHandler in vscode-lm.ts now yields initial token usage at stream start and periodic updates every 500 characters.
    • Final token usage is reported at stream end, with cache token fields set to 0.
  • Tests:
    • Updated vscode-lm.spec.ts to expect initial, periodic, and final token usage chunks.
    • Added a test for getApiProtocol in provider-settings.test.ts asserting it returns 'openai' for the vscode-lm provider (sketched below).

This description was created by Ellipsis for 048fcf7. You can customize this summary. It will automatically update as commits are pushed.
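
For the provider-settings expectation listed under Tests, a hedged sketch of what such a test could look like; the import path and the exact getApiProtocol signature are assumptions based on the file names mentioned:

```typescript
// Assumed import path and single-argument signature, for illustration only.
import { describe, it, expect } from "vitest"
import { getApiProtocol } from "../provider-settings"

describe("getApiProtocol", () => {
	it("returns 'openai' for the vscode-lm provider", () => {
		expect(getApiProtocol("vscode-lm")).toBe("openai")
	})
})
```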

- Add initial usage yield with input tokens at stream start
- Yield periodic token updates during streaming (every 500 chars)
- Include cache token fields (set to 0) for consistency with other providers
- This ensures the context window progress bar updates properly during streaming
@roomote roomote bot requested review from mrubens, cte and jr as code owners July 23, 2025 13:31
@dosubot dosubot bot added the size:M (This PR changes 30-99 lines, ignoring generated files) and bug (Something isn't working) labels Jul 23, 2025
@hannesrudolph hannesrudolph added the Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.) label Jul 23, 2025
The PR added an initial usage yield at the start of the stream, which
causes tests to receive 3 chunks instead of 2. Updated tests to:
- Expect 3 chunks (initial usage + text + final usage)
- Handle the new chunk ordering correctly
- Fix error handling test to account for initial usage before error
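
A minimal sketch of the updated expectation; createStream is a hypothetical stand-in for however the spec file actually obtains the handler's message stream:

```typescript
import { it, expect } from "vitest"

// Hypothetical helper standing in for the spec's real handler setup.
declare function createStream(): AsyncIterable<{ type: string }>

it("yields initial usage, text, then final usage", async () => {
	const chunks: { type: string }[] = []
	for await (const chunk of createStream()) {
		chunks.push(chunk)
	}

	// Three chunks now: initial usage + text + final usage.
	expect(chunks).toHaveLength(3)
	expect(chunks[0].type).toBe("usage")
	expect(chunks[1].type).toBe("text")
	expect(chunks[2].type).toBe("usage")
})
```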
@daniel-lxs daniel-lxs moved this from Triage to PR [Needs Prelim Review] in Roo Code Roadmap Jul 24, 2025
@hannesrudolph hannesrudolph added the PR - Needs Preliminary Review label and removed the Issue/PR - Triage (New issue. Needs quick review to confirm validity and assign labels.) label Jul 24, 2025
Labels: bug (Something isn't working) · PR - Needs Preliminary Review · size:M (This PR changes 30-99 lines, ignoring generated files)

Projects: Status: PR [Needs Prelim Review]

Development: Successfully merging this pull request may close this issue:

Token count or the Token progress bar doesn't work when using "VS Code LM API" as the API Provider

2 participants