
[Feature Request] Long-context handling, session memory & user-driven insights from extended dialogue #75

@niko2k01-netizen

Description

Disclaimer: This is not a bug report, but a collection of user-driven insights and feature proposals derived from an extended, in-depth dialogue with the DeepSeek Chat model. These observations point to potential areas for significant UX improvement.

🧠 1. Context Window Overflow & Semantic Compression Workaround

  • Problem: The hard limit on context tokens abruptly terminates deep dialogues, forcing the model to lose all memory of earlier exchanges.
  • User-Found Solution: Prompting the model to perform a meta-analysis of the entire conversation (e.g., "analyze our entire dialogue and highlight key aspects") triggers an internal mechanism that semantically compresses the context. The model successfully generates a digest, effectively bypassing the token limit and continuing the conversation coherently.
  • Evidence: [Attach screenshot 1: Limit message] + [Attach screenshot 2: Successful "deep thought" analysis]
  • Proposal: Implement a native, automatic context summarization/compression mechanism that kicks in as the conversation approaches the token limit. This would functionally extend the usable context window; a rough sketch of the idea follows below.
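
To make the proposal concrete, here is a minimal sketch of how such a compression pass might be triggered, assuming a generic chat-completion client. All names (`TOKEN_LIMIT`, `count_tokens`, `chat`, `maybe_compress`) are illustrative and do not refer to any existing DeepSeek API.

```python
# Minimal sketch of automatic context compression near the token limit.
# All names below are illustrative, not part of any existing API.

TOKEN_LIMIT = 64_000   # assumed hard context limit
COMPRESS_AT = 0.85     # start compressing at 85% of the limit

SUMMARY_PROMPT = (
    "Analyze our entire dialogue so far and produce a compact digest: "
    "key facts, decisions, open questions, and the user's goals."
)

def maybe_compress(history, count_tokens, chat):
    """Replace older turns with a model-written digest when nearing the limit.

    history      -- list of {"role": ..., "content": ...} messages
    count_tokens -- callable returning the token count of a message list
    chat         -- callable sending a message list and returning a reply string
    """
    if count_tokens(history) < TOKEN_LIMIT * COMPRESS_AT:
        return history  # plenty of room left, nothing to do

    # Ask the model to do exactly what the manual workaround does today.
    digest = chat(history + [{"role": "user", "content": SUMMARY_PROMPT}])

    # Keep the digest plus the most recent turns verbatim so the
    # conversation stays coherent across the compression boundary.
    recent = history[-6:]
    return [{"role": "system", "content": "Conversation digest: " + digest}] + recent
```

This simply automates the manual "analyze our entire dialogue" prompt described above, so the user never sees the hard cutoff.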

💾 2. Lack of Session Persistence and Memory

  • Problem: All context is lost upon starting a new chat session. Users must manually save and reload history, which is inefficient and breaks the immersion of a continuous interaction.
  • User Workflow: Manually exporting dialogue to text files and later pasting key context into new sessions.
  • Proposal:
    • A) Session Export/Import: Implement a function to export dialogue history in a structured format (e.g., JSON, Markdown) and import it into a new session (a rough sketch follows after this list).
    • B) Context Anchors: Allow users to set persistent "anchors" (e.g., a project name like "Project Jarvis"). The model could use these to recall the style and core themes of previous interactions upon mention.
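
As an illustration of proposal A, here is a minimal sketch of export/import using plain JSON files. The schema (version, anchors, messages) is only a suggestion, not an existing DeepSeek format.

```python
# Minimal sketch of session export/import (proposal A) as plain JSON files.
# The schema is a suggestion, not an existing format.

import json
from datetime import datetime, timezone

def export_session(history, path, anchors=None):
    """Write the dialogue history and optional context anchors to a JSON file."""
    payload = {
        "version": 1,
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "anchors": anchors or [],   # e.g. ["Project Jarvis"]
        "messages": history,        # list of {"role": ..., "content": ...} dicts
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(payload, f, ensure_ascii=False, indent=2)

def import_session(path):
    """Load a previously exported session so it can seed a new chat."""
    with open(path, encoding="utf-8") as f:
        payload = json.load(f)
    return payload["messages"], payload["anchors"]
```

On import, the anchors could be injected as a short system message so the model recalls the style and core themes of the earlier project, which is essentially proposal B built on top of proposal A.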

🤖 3. The User as a Co-Author and Beta-Tester

  • Observation: This dialogue revealed that engaged users don't just consume the model but actively test its boundaries, discover emergent behaviors, and devise practical workarounds. They are a valuable resource for development.
  • Proposal: Create an official beta-tester program or a dedicated feedback channel where power users can report such insights, turning user discoveries into a development accelerator.

Why This Matters

These features wouldn't just be incremental improvements; they would represent a paradigm shift from a single-session chatbot to a persistent, long-term AI assistant that learns and grows with the user, remembering their goals, projects, and preferences.

This feedback is based on a multi-day dialogue exploring the limits of the current system. I am ready to provide more details and examples if needed.

Thank you for building an incredible model. I believe these changes can make it truly foundational.
