Context Shifting #5265

@aarongerber

Description

#4588 was closed as stale. Being able to toggle this on once you hit context limits would be very useful; right now, during longer sessions, I end up switching to KoboldCPP.

Below is the previous ticket's contents:
About 10 days ago, KoboldCpp added a feature called Context Shifting which is supposed to greatly reduce reprocessing. Here is their official description of the feature:

NEW FEATURE: Context Shifting (A.K.A. EvenSmarterContext) - This feature utilizes KV cache shifting to automatically remove old tokens from context and add new ones without requiring any reprocessing. So long as you use no memory/fixed memory and don't use world info, you should be able to avoid almost all reprocessing between consecutive generations even at max context. This does not consume any additional context space, making it superior to SmartContext.
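Conceptually, context shifting keeps the existing KV cache and evicts only the oldest entries when the window fills, so each new generation pays only for genuinely new tokens instead of reprocessing the whole prompt. Below is a toy Python sketch of that idea; the `ShiftingCache` class and its counters are hypothetical illustrations, not llama.cpp's actual implementation (which shifts cached entries and their RoPE positions in place, see the linked PR #3228):

```python
from collections import deque

class ShiftingCache:
    """Toy KV cache with context shifting: when the window is full,
    the oldest entry is evicted and the rest shift left, so only new
    tokens incur the expensive 'processing' step."""

    def __init__(self, max_ctx: int):
        self.max_ctx = max_ctx
        self.cache = deque()   # cached per-token states, oldest first
        self.processed = 0     # counts simulated forward passes

    def _process(self, token: str) -> str:
        self.processed += 1    # stands in for a real forward pass
        return f"kv({token})"

    def extend(self, tokens: list) -> None:
        for tok in tokens:
            if len(self.cache) == self.max_ctx:
                self.cache.popleft()   # shift: drop oldest, keep the rest
            self.cache.append(self._process(tok))

# With a 4-token window, filling it costs 4 steps, and 2 more tokens
# cost only 2 further steps; a naive approach would recompute the
# entire window for each consecutive generation instead.
cache = ShiftingCache(max_ctx=4)
cache.extend(["a", "b", "c", "d"])
cache.extend(["e", "f"])
print(cache.processed)        # 6 total processing steps
print(len(cache.cache))       # window still holds 4 entries
```

This is why the feature only works cleanly without memory/world-info injection: inserting text at the front of the prompt invalidates the shifted cache and forces reprocessing anyway.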

Any chance this gets added to Ooba as well?

Additional Context

Reddit thread: https://www.reddit.com/r/LocalLLaMA/comments/17ni4hm/koboldcpp_v148_context_shifting_massively_reduced/
llama.cpp pull: ggml-org/llama.cpp#3228
kobold.cpp 1.48.1 release: https://github.com/LostRuins/koboldcpp/releases/tag/v1.48.1

Metadata

Assignees: none
Labels: enhancement (new feature or request)
Milestone: none