Incorrect Context Window Size Detection for VS Code LM API Provider Models: Request to Consider Improvement #4737
OleynikAleksandr started this conversation in Feature Requests
Replies: 0 comments
Dear Roo Code Team,
I would like to request that you consider correcting the context window size detection for models available via the VS Code LM API provider.
Currently, there appears to be a discrepancy: for example, the Gemini 2.5 Pro model has an actual context window of 1 million tokens, whereas Roo Code reports it as 63,800 tokens. This discrepancy significantly limits how much of the model's context can actually be used through your tool.
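For reference, here is a minimal sketch, assuming Roo Code derives the limit from the `maxInputTokens` field that the VS Code LM API reports for each chat model (providers such as Copilot may cap this value well below the underlying model's documented context window):

```typescript
import * as vscode from 'vscode';

// Minimal sketch: log what the VS Code LM API reports for each chat model.
// Assumption: Roo Code reads the context window from `maxInputTokens`,
// which reflects the provider's cap, not the model's documented limit.
async function logReportedContextWindows(): Promise<void> {
  const models = await vscode.lm.selectChatModels({ vendor: 'copilot' });
  for (const model of models) {
    // For Gemini 2.5 Pro this may print roughly 63,800 even though the
    // model itself supports a 1M-token context window.
    console.log(
      `${model.vendor}/${model.family} (${model.id}): ` +
        `maxInputTokens = ${model.maxInputTokens}`
    );
  }
}
```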
I would be grateful if you could look into this issue and, if possible, make the necessary adjustments.
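One possible adjustment, offered only as a hypothetical sketch (the names below are illustrative, not Roo Code's actual code), would be an override table that maps known model families to their documented context window sizes and falls back to the provider-reported value otherwise:

```typescript
// Hypothetical sketch only: identifiers and values are illustrative.
const KNOWN_CONTEXT_WINDOWS: Record<string, number> = {
  'gemini-2.5-pro': 1_000_000, // documented 1M-token context window
};

function resolveContextWindow(family: string, reportedMax: number): number {
  // Prefer the documented size for known families; otherwise fall back
  // to whatever the VS Code LM API reported.
  return KNOWN_CONTEXT_WINDOWS[family] ?? reportedMax;
}
```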
Thank you!