Our comprehensive survey paper on Context Engineering is coming soon! Stay tuned for the latest academic insights and theoretical foundations.
A comprehensive survey and collection of resources on Context Engineering - the evolution from static prompting to dynamic, context-aware AI systems.
This project is ongoing and continuously evolving. While we strive for accuracy and completeness, there may be errors, omissions, or outdated information. We welcome corrections, suggestions, and contributions from the community. Please stay tuned for regular updates and improvements.
- [2025.7] Repository initialized with comprehensive outline
- [2025.7] Survey structure established following modern context engineering paradigms
In the era of Large Language Models (LLMs), the limitations of static prompting have become increasingly apparent. Context Engineering represents the natural evolution to address LLM uncertainty and achieve production-grade AI deployment. Unlike traditional prompt engineering, context engineering encompasses the complete information payload provided to LLMs at inference time, including all structured informational components necessary for plausible task completion.
This repository serves as a comprehensive survey of context engineering techniques, methodologies, and applications.
- Related Survey
- Definition of Context Engineering
- Why Context Engineering?
- Contextual Components, Techniques and Architectures
- Implementation, Challenges, and Mitigation Strategies
- Evaluation Paradigms for Context-Driven Systems
- Applications and Systems
- Limitations and Future Directions
General AI Survey Papers
- A Survey of Large Language Models, Zhao et al.,
- The Prompt Report: A Systematic Survey of Prompt Engineering Techniques, Schulhoff et al.,
- A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications, Sahoo et al.,
- A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models, Gao et al.,
Context and Reasoning
- A Survey on In-context Learning, Dong et al.,
- The Mystery of In-Context Learning: A Comprehensive Survey on Interpretation and Analysis, Zhou et al.,
- A Comprehensive Survey of Retrieval-Augmented Generation (RAG): Evolution, Current Landscape and Future Directions, Gupta et al.,
- Retrieval-Augmented Generation for Large Language Models: A Survey, Gao et al.,
- A Survey on Knowledge-Oriented Retrieval-Augmented Generation, Cheng et al.,
Memory Systems and Context Persistence
- A Survey on the Memory Mechanism of Large Language Model based Agents, Zhang et al.,
- From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs, Wu et al.,
Foundational Survey Papers from Major Venues
- AUTOPROMPT: Eliciting Knowledge from Language Models with Automatically Generated Prompts, Shin et al.,
- The Power of Scale for Parameter-Efficient Prompt Tuning, Lester et al.,
- Prefix-Tuning: Optimizing Continuous Prompts for Generation, Li et al.,
- An Explanation of In-context Learning as Implicit Bayesian Inference, Xie et al.,
- Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?, Min et al.,
Additional RAG and Retrieval Surveys
- Retrieval-Augmented Generation for AI-Generated Content: A Survey, Various,
- Retrieval Augmented Generation (RAG) and Beyond: A Comprehensive Survey on How to Make your LLMs use External Data More Wisely, Various,
- Large language models (LLMs): survey, technical frameworks, and future challenges, Various,
Context is not just the single prompt users send to an LLM. Context is the complete information payload provided to an LLM at inference time, encompassing all structured informational components that the model needs to plausibly accomplish a given task.
To formally define Context Engineering, we must first mathematically characterize the LLM generation process. Let us model an LLM as a probabilistic function:
$$P(\text{output} | \text{context}) = \prod_{t} P(\text{token}_t | \text{previous tokens}, \text{context})$$

Where:

- $\text{context}$ represents the complete input information provided to the LLM
- $\text{output}$ represents the generated response sequence
- $P(\text{token}_t | \text{previous tokens}, \text{context})$ is the probability of generating each token given the context
In traditional prompt engineering, the context is treated as a simple static string:

$$\text{context} = \text{prompt}$$
However, in Context Engineering, we decompose the context into multiple structured components:
$$\text{context} = \text{Assemble}(\text{instructions}, \text{knowledge}, \text{tools}, \text{memory}, \text{state}, \text{query})$$

Where:

- $\text{instructions}$: System prompts and rules
- $\text{knowledge}$: Retrieved relevant information
- $\text{tools}$: Available function definitions
- $\text{memory}$: Conversation history and learned facts
- $\text{state}$: Current world/user state
- $\text{query}$: User's immediate request
Context Engineering is formally defined as the optimization problem:
$$\max_{\text{Assemble},\ \text{Retrieve},\ \text{Select},\ \text{Extract}} \ \mathbb{E}\big[\text{Reward}\big(\text{LLM}(\text{context})\big)\big]$$

Subject to constraints:

- $|\text{context}| \leq \text{MaxTokens}$ (context window limitation)
- $\text{knowledge} = \text{Retrieve}(\text{query}, \text{database})$
- $\text{memory} = \text{Select}(\text{history}, \text{query})$
- $\text{state} = \text{Extract}(\text{world})$

Where:

- $\text{Reward}$ measures the quality of generated responses
- $\text{Retrieve}$, $\text{Select}$, $\text{Extract}$ are the functions for information gathering (a toy gathering pipeline is sketched below)
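To make these constraints concrete, here is a minimal, hypothetical sketch in Python. The helpers (`retrieve`, `select_memory`, `count_tokens`) and the token limit are illustrative placeholders, not an established API; a real system would plug in an actual retriever, memory store, and model tokenizer.

```python
# Illustrative sketch only: gather components under the constraints above.
# All helpers are hypothetical placeholders, not a real library API.

MAX_TOKENS = 8192  # assumed context window limit


def count_tokens(text: str) -> int:
    # Crude whitespace proxy; a real system would use the model's tokenizer.
    return len(text.split())


def retrieve(query: str, database: list[str], k: int = 3) -> list[str]:
    # knowledge = Retrieve(query, database): naive keyword-overlap ranking.
    def overlap(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(database, key=overlap, reverse=True)[:k]


def select_memory(history: list[str], query: str, k: int = 2) -> list[str]:
    # memory = Select(history, query): here simply the most recent k turns.
    return history[-k:]


def build_context(instructions: str, query: str, database: list[str],
                  history: list[str], state: str) -> str:
    knowledge = retrieve(query, database)
    memory = select_memory(history, query)
    parts = [instructions, *memory, *knowledge, state, query]
    context = "\n\n".join(p for p in parts if p)
    # Enforce |context| <= MaxTokens by dropping the oldest non-instruction parts.
    while count_tokens(context) > MAX_TOKENS and len(parts) > 2:
        parts.pop(1)
        context = "\n\n".join(p for p in parts if p)
    return context
```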
The context assembly can be decomposed as:
$$\text{Assemble}(c_1, \ldots, c_n) = \text{Concat}\big(\text{Format}_1(c_1), \ldots, \text{Format}_n(c_n)\big)$$

Where $c_1, \ldots, c_n$ are the individual context components and $\text{Format}_i$ are component-specific formatting functions.
Context Engineering is therefore the discipline of designing and optimizing these assembly and formatting functions to maximize task performance.
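As a concrete, purely illustrative reading of this decomposition, the sketch below defines one formatting function per component and concatenates the results. The `Context` dataclass and the section headers are assumptions for the example, not a standard interface.

```python
from dataclasses import dataclass, field


@dataclass
class Context:
    """Structured context components as defined above."""
    instructions: str = ""
    knowledge: list[str] = field(default_factory=list)
    tools: list[str] = field(default_factory=list)
    memory: list[str] = field(default_factory=list)
    state: str = ""
    query: str = ""


def format_list(title: str, items: list[str]) -> str:
    # One of the component-specific Format_i functions.
    if not items:
        return ""
    return f"### {title}\n" + "\n".join(f"- {item}" for item in items)


def assemble(ctx: Context) -> str:
    """Concat(Format_1(instructions), ..., Format_n(query))."""
    sections = [
        f"### Instructions\n{ctx.instructions}",
        format_list("Retrieved knowledge", ctx.knowledge),
        format_list("Available tools", ctx.tools),
        format_list("Conversation memory", ctx.memory),
        f"### Current state\n{ctx.state}",
        f"### User query\n{ctx.query}",
    ]
    return "\n\n".join(s for s in sections if s.strip())
```

With this structure, "optimizing the assembly" means tuning which components are included, how each is formatted, and in what order they are concatenated.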
From this formalization, we derive four fundamental principles:
- System-Level Optimization: Context generation is a multi-objective optimization problem over assembly functions, not simple string manipulation.
- Dynamic Adaptation: The context assembly function adapts to each $\text{query}$ and $\text{state}$ at inference time: $\text{Assemble}(\cdot | \text{query}, \text{state})$.
- Information-Theoretic Optimality: The retrieval function maximizes relevant information: $\text{Retrieve} = \arg\max \text{Relevance}(\text{knowledge}, \text{query})$ (a minimal sketch follows this list).
- Structural Sensitivity: The formatting functions encode structure that aligns with LLM processing capabilities.
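To illustrate the third principle, here is a toy approximation of $\arg\max \text{Relevance}$: documents are ranked by cosine similarity to the query. The `embed` function is a throwaway hashed bag-of-words stand-in for a real embedding model.

```python
import hashlib
import math


def embed(text: str, dim: int = 64) -> list[float]:
    # Toy hashed bag-of-words vector; a real system would call an embedding model.
    vec = [0.0] * dim
    for token in text.lower().split():
        index = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[index] += 1.0
    return vec


def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve_top_k(query: str, documents: list[str], k: int = 5) -> list[str]:
    """Approximate Retrieve = argmax Relevance(knowledge, query) via cosine similarity."""
    q = embed(query)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```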
Context Engineering can be formalized within a Bayesian framework where the optimal context is inferred:
$$P(\text{context} | \text{query}, \text{history}, \text{world}) \propto P(\text{query} | \text{context}) \cdot P(\text{context} | \text{history}, \text{world})$$

Where:

- $P(\text{query} | \text{context})$ models query-context compatibility
- $P(\text{context} | \text{history}, \text{world})$ represents the prior context probability

The optimal context assembly becomes:

$$\text{context}^{*} = \arg\max_{\text{context}} P(\text{context} | \text{query}, \text{history}, \text{world})$$

A toy scoring sketch based on this formulation follows the list of benefits below.
This Bayesian formulation enables:
- Uncertainty Quantification: Modeling confidence in context relevance
- Adaptive Retrieval: Updating context beliefs based on feedback
- Multi-step Reasoning: Maintaining context distributions across interactions
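Purely as an illustration of this formulation (not a method from any cited paper), candidate contexts can be scored with proxy terms for the likelihood and the prior; the lexical-overlap scores below are crude stand-ins for model-based probabilities.

```python
import math


def _overlap(a: str, b: str) -> float:
    # Jaccard word overlap as a cheap proxy for semantic compatibility.
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / (len(sa | sb) or 1)


def score_context(candidate: str, query: str, history: list[str]) -> float:
    """Proxy for log P(query | context) + log P(context | history, world)."""
    likelihood = _overlap(candidate, query)  # query-context compatibility
    prior = max((_overlap(candidate, turn) for turn in history), default=0.0)
    # Small epsilon keeps the log finite when an overlap is zero.
    return math.log(likelihood + 1e-6) + math.log(prior + 1e-6)


def best_context(candidates: list[str], query: str, history: list[str]) -> str:
    # context* = argmax_context P(context | query, history, world)
    return max(candidates, key=lambda c: score_context(c, query, history))
```

Because the score decomposes into a likelihood term and a prior term, feedback can update either term independently, which is what makes adaptive retrieval and uncertainty quantification natural in this framing.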
| Dimension | Prompt Engineering | Context Engineering |
|---|---|---|
| Mathematical Model | $\text{context} = \text{prompt}$ (static string) | $\text{context} = \text{Assemble}(\ldots)$ (dynamic assembly function) |
| Optimization Target | Wording of a single prompt | Assembly, retrieval, and formatting functions |
| Complexity | Manual string manipulation | Multi-objective, system-level optimization |
| Information Theory | Fixed information content | Adaptive information maximization |
| State Management | Stateless function | Stateful with $\text{memory}$ and $\text{state}$ |
| Scalability | Linear in prompt length | Sublinear through compression/filtering |
| Error Analysis | Manual prompt inspection | Systematic evaluation of assembly components |
The evolution from prompt engineering to context engineering represents a fundamental maturation in AI system design. As influential figures like Andrej Karpathy, Tobi Lütke, and Simon Willison have argued, the term "prompt engineering" has been diluted to mean simply "typing things into a chatbot," failing to capture the complexity required for industrial-strength LLM applications.
Most failures in modern agentic systems are no longer attributable to core model reasoning capabilities but are instead "context failures". The true engineering challenge lies not in what question to ask, but in ensuring the model has all necessary background, data, tools, and memory to answer meaningfully and reliably.
While prompt engineering suffices for simple, self-contained tasks, it breaks down when scaled to:
- Complex, multi-step applications
- Data-rich enterprise environments
- Stateful, long-running workflows
- Multi-user, multi-tenant systems
Context Engineering provides the architectural foundation for managing state, integrating diverse data sources, and maintaining coherence across these demanding scenarios.
Traditional prompting treats context as a static string, but enterprise applications require:
- Dynamic Information Assembly: Context created on-the-fly, tailored to specific users and queries
- Multi-Source Integration: Combining databases, APIs, documents, and real-time data
- State Management: Maintaining conversation history, user preferences, and workflow status
- Tool Orchestration: Coordinating external function calls and API interactions (a minimal dispatch sketch follows this list)
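As one small example of tool orchestration, the sketch below registers a tool definition in the JSON-schema style used by many function-calling APIs and dispatches a model-emitted call to a local Python function. The schema fields and the `get_weather` tool are illustrative assumptions; real providers differ in exact format.

```python
import json

# Illustrative tool definition in a JSON-schema style; exact field names
# vary across function-calling APIs.
GET_WEATHER_TOOL = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}


def get_weather(city: str) -> str:
    # Stub implementation; a real tool would call an external weather API.
    return f"Weather for {city}: sunny, 22 C"


TOOL_REGISTRY = {"get_weather": get_weather}


def dispatch_tool_call(tool_call_json: str) -> str:
    """Route a model-emitted tool call (as JSON) to the matching Python function."""
    call = json.loads(tool_call_json)
    fn = TOOL_REGISTRY[call["name"]]
    return fn(**call["arguments"])


# Example: the model asks for the weather in Paris.
print(dispatch_tool_call('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
```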
If prompt engineering is writing a single line of dialogue for an actor, context engineering is the entire process of building the set, designing lighting, providing detailed backstory, and directing the scene. The dialogue only achieves its intended impact because of the rich, carefully constructed environment surrounding it.
LLMs are essentially "brains in a vat" - powerful reasoning engines lacking connection to specific environments. Context Engineering provides:
- Synthetic Sensory Systems: Retrieval mechanisms as artificial perception
- Proxy Embodiment: Tool use as artificial action capabilities
- Artificial Memory: Structured information storage and retrieval
Context Engineering addresses the fundamental challenge of information retrieval where the "user" is not human but an AI agent. This requires:
- Semantic Understanding: Bridging the gap between intent and expression
- Relevance Optimization: Ranking and filtering vast knowledge bases
- Query Transformation: Converting ambiguous requests into precise retrieval operations
Enterprise applications demand:
- Deterministic Behavior: Predictable outputs across different contexts and users
- Error Handling: Graceful degradation when information is incomplete or contradictory
- Audit Trails: Transparency in how context influences model decisions
- Compliance: Meeting regulatory requirements for data handling and decision making
Context Engineering enables:
- Cost Optimization: Strategic choice between RAG and long-context approaches (a toy cost heuristic follows this list)
- Latency Management: Efficient information retrieval and context assembly
- Resource Utilization: Optimal use of finite context windows and computational resources
- Maintenance Scalability: Systematic approaches to updating and managing knowledge bases
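The sketch below is a toy heuristic for the RAG vs. long-context choice mentioned above: if everything fits in the window and under budget, pass it all; otherwise fall back to retrieval. The limit and price are made-up placeholders, not real provider numbers.

```python
# Toy heuristic only: choose between passing all documents in a long context
# and retrieving a small subset (RAG). Limits and prices are placeholders.

CONTEXT_LIMIT_TOKENS = 128_000
COST_PER_1K_TOKENS = 0.003  # placeholder price, not a real provider rate


def estimate_tokens(texts: list[str]) -> int:
    # Whitespace split as a crude stand-in for a real tokenizer.
    return sum(len(t.split()) for t in texts)


def choose_strategy(documents: list[str], budget_usd: float) -> str:
    total_tokens = estimate_tokens(documents)
    full_cost = total_tokens / 1000 * COST_PER_1K_TOKENS
    if total_tokens <= CONTEXT_LIMIT_TOKENS and full_cost <= budget_usd:
        return "long-context"  # cheap and small enough to pass everything
    return "rag"               # otherwise retrieve only the most relevant chunks
```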
Context Engineering elevates AI development from a collection of "prompting tricks" to a rigorous discipline of systems architecture. It applies decades of knowledge in operating system design, memory management, and distributed systems to the unique challenges of LLM-based applications.
This discipline is foundational for unlocking the full potential of LLMs in production systems, enabling the transition from one-off text generation to autonomous agents and sophisticated AI copilots that can reliably operate in complex, dynamic environments.
Position Interpolation and Extension Techniques
- Extending Context Window of Large Language Models via Position Interpolation, Chen et al.,
- YaRN: Efficient Context Window Extension of Large Language Models, Peng et al.,
- LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens, Ding et al.,
- LongRoPE2: Near-Lossless LLM Context Window Scaling, Shang et al.,
Memory-Efficient Attention Mechanisms
- Fast Multipole Attention: A Divide-and-Conquer Attention Mechanism for Long Sequences, Kang et al.,
- Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention, Munkhdalai et al.,
- DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads, Xiao et al.,
- Star Attention: Efficient LLM Inference over Long Sequences, Acharya et al.,
Ultra-Long Sequence Processing (100K+ Tokens)
- TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation, Wu et al.,
- LongHeads: Multi-Head Attention is Secretly a Long Context Processor, Lu et al.,
- ∞Bench: Extending Long Context Evaluation Beyond 100K Tokens, Bai et al.,
Comprehensive Extension Surveys and Methods
- Beyond the Limits: A Survey of Techniques to Extend the Context Length in Large Language Models, Various,
- A Controlled Study on Long Context Extension and Generalization in LLMs, Various,
- Selective Attention: Enhancing Transformer through Principled Context Control, Various,
Vision-Language Models with Sophisticated Context Understanding
- Towards LLM-Centric Multimodal Fusion: A Survey on Integration Strategies and Techniques, An et al.,
- Comprehending Multimodal Content via Prior-LLM Context Fusion, Wang et al.,
- V2PE: Improving Multimodal Long-Context Capability of Vision-Language Models with Variable Visual Position Encoding, Dai et al.,
- Flamingo: a Visual Language Model for Few-Shot Learning, Alayrac et al.,
Audio-Visual Context Integration and Processing
- Aligned Better, Listen Better for Audio-Visual Large Language Models, Guo et al.,
- AVicuna: Audio-Visual LLM with Interleaver and Context-Boundary Alignment for Temporal Referential Dialogue, Chen et al.,
- SonicVisionLM: Playing Sound with Vision Language Models, Xie et al.,
- SAVEn-Vid: Synergistic Audio-Visual Integration for Enhanced Understanding in Long Video Context, Li et al.,
Multi-Modal Prompt Engineering and Context Design
- CaMML: Context-Aware Multimodal Learner for Large Models, Chen et al.,
- Visual In-Context Learning for Large Vision-Language Models, Zhou et al.,
- CAMA: Enhancing Multimodal In-Context Learning with Context-Aware Modulated Attention, Li et al.,
CVPR 2024 Vision-Language Advances
- CogAgent: A Visual Language Model for GUI Agents, Various,
- LISA: Reasoning Segmentation via Large Language Model, Various,
- Reproducible scaling laws for contrastive language-image learning, Various,
Video and Temporal Understanding
Knowledge Graph-Enhanced Language Models
- Learn Together: Joint Multitask Finetuning of Pretrained KG-enhanced LLM for Downstream Tasks, Martynova et al.,
- Knowledge Graph Tuning: Real-time Large Language Model Personalization based on Human Feedback, Sun et al.,
- Knowledge Graph-Guided Retrieval Augmented Generation, Zhu et al.,
- KGLA: Knowledge Graph Enhanced Language Agents for Customer Service, Anonymous et al.,
Graph Neural Networks Combined with Language Models
- Are Large Language Models In-Context Graph Learners?, Li et al.,
- Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning, Hu et al.,
- GL-Fusion: Rethinking the Combination of Graph Neural Network and Large Language model, Yang et al.,
- NT-LLM: A Novel Node Tokenizer for Integrating Graph Structure into Large Language Models, Ji et al.,
Structured Data Integration
- CoddLLM: Empowering Large Language Models for Data Analytics, Authors et al.,
- Structure-Guided Large Language Models for Text-to-SQL Generation, Authors et al.,
- StructuredRAG: JSON Response Formatting with Large Language Models, Authors et al.,
Foundational KG-LLM Integration Methods
- Unifying Large Language Models and Knowledge Graphs: A Roadmap, Various,
- Combining Knowledge Graphs and Large Language Models, Various,
- All Against Some: Efficient Integration of Large Language Models for Message Passing in Graph Neural Networks, Various,
- Large Language Models for Graph Learning, Various,
Self-Supervised Context Generation and Augmentation
- SelfCite: Self-Supervised Alignment for Context Attribution in Large Language Models, Chuang et al.,
- Self-Supervised Prompt Optimization, Xiang et al.,
- SCOPE: A Self-supervised Framework for Improving Faithfulness in Conditional Text Generation, Duong et al.,
Reasoning Models That Generate Their Own Context
- Self-Consistency Improves Chain of Thought Reasoning in Language Models, Wang et al.,
- Tree of Thoughts: Deliberate Problem Solving with Large Language Models, Yao et al.,
- Rethinking Chain-of-Thought from the Perspective of Self-Training, Wu et al.,
- Autonomous Tree-search Ability of Large Language Models, Authors et al.,
Iterative Context Refinement and Self-Improvement
- Self-Refine: Iterative Refinement with Self-Feedback, Madaan et al.,
- Reflect, Retry, Reward: Self-Improving LLMs via Reinforcement Learning, Authors et al.,
- Large Language Models Can Self-Improve in Long-context Reasoning, Li et al.,
Meta-Learning and Autonomous Context Evolution
- Meta-in-context learning in large language models, Coda-Forno et al.,
- EvoPrompt: Connecting LLMs with Evolutionary Algorithms Yields Powerful Prompt Optimizers, Guo et al.,
- AutoPDL: Automatic Prompt Optimization for LLM Agents, Spiess et al.,
Foundational Chain-of-Thought Research
Foundational RAG Systems
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks, Lewis et al.,
- A Survey on Knowledge-Oriented Retrieval-Augmented Generation, Cheng et al.,
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models, Ding et al.,
Graph-Based RAG Systems
- GFM-RAG: Graph Foundation Model for Retrieval Augmented Generation, Luo et al.,
- GRAG: Graph Retrieval-Augmented Generation, Hu et al.,
- HybridRAG: A Hybrid Retrieval System for RAG Combining Vector and Graph Search, Sarabesh,
Multi-Agent and Hierarchical RAG
- HM-RAG: Hierarchical Multi-Agent Multimodal Retrieval Augmented Generation, Liu et al.,
- MultiHop-RAG: Benchmarking Retrieval-Augmented Generation for Multi-Hop Queries, Tang & Yang,
- MMOA-RAG: Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning, Chen et al.,
Real-Time and Streaming RAG
- StreamingRAG: Real-time Contextual Retrieval and Generation Framework, Sankaradas et al.,
- Multi-task Retriever Fine-tuning for Domain-Specific and Efficient RAG, Authors,
Persistent Memory Architecture
- MemGPT: Towards LLMs as Operating Systems, Packer et al.,
- Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory, Taranjeet et al.,
- MemoryLLM: Towards Self-Updatable Large Language Models, Wang et al.,
Memory-Augmented Neural Networks
- Survey on Memory-Augmented Neural Networks: Cognitive Insights to AI Applications, Khosla et al.,
- A Machine with Short-Term, Episodic, and Semantic Memory Systems, Kim et al.,
- From Human Memory to AI Memory: A Survey on Memory Mechanisms in the Era of LLMs, Wu et al.,
Episodic Memory and Context Persistence
- The Role of Memory in LLMs: Persistent Context for Smarter Conversations, Porcu,
- Episodic Memory in AI Agents Poses Risks that Should Be Studied and Mitigated, Christiano et al.,
Agent Interoperability Protocols
- A survey of agent interoperability protocols: Model Context Protocol (MCP), Agent Communication Protocol (ACP), and Agent-to-Agent Protocol (A2A), Zhang et al.,
- Expressive Multi-Agent Communication via Identity-Aware Learning, Du et al.,
- Context-aware Communication for Multi-agent Reinforcement Learning (CACOM), Li et al.,
Structured Communication Frameworks
- Learning Structured Communication for Multi-Agent Reinforcement Learning, Wang et al.,
- AC2C: Adaptively Controlled Two-Hop Communication for Multi-Agent Reinforcement Learning, Wang et al.,
- Task-Agnostic Contrastive Pre-Training for Inter-Agent Communication, Sun et al.,
LLM-Enhanced Agent Communication
- ProAgent: Building Proactive Cooperative Agents with Large Language Models, Zhang et al.,
- Model Context Protocol (MCP), Anthropic,
Foundational Tool Learning
- Toolformer: Language Models Can Teach Themselves to Use Tools, Schick et al.,
- ReAct: Synergizing Reasoning and Acting in Language Models, Yao et al.,
- Augmented Language Models: a Survey, Qin et al.,
- Tool Learning with Large Language Models: A Survey, Qu et al.,
Advanced Function Calling Systems
- Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks, Smith et al.,
- HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face, Shen et al.,
- Enhancing Function-Calling Capabilities in LLMs: Strategies for Prompt Formats, Data Integration, and Multilingual Translation, Chen et al.,
Multi-Agent Function Calling
- ToolACE: Winning the Points of LLM Function Calling, Zhang et al.,
- Berkeley Function Leaderboard (BFCL): Evaluating Function-Calling Abilities, Various,
Foundational Long-Context Benchmarks
- RULER: What's the Real Context Size of Your Long-Context Language Models?, Hsieh et al.,
- LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding, Bai et al.,
- ∞BENCH: Extending Long Context Evaluation Beyond 100K Tokens, Zhang et al.,
- VL-ICL Bench: The Devil in the Details of Multimodal In-Context Learning, Zong et al.,
Multimodal and Specialized Evaluation
- MultiModal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models, Wang et al.,
- Contextualized Topic Coherence (CTC) Metrics, Rahimi et al.,
- BBScore: A Brownian Bridge Based Metric for Assessing Text Coherence, Sheng et al.,
RAG and Generation Evaluation
- Evaluation of Retrieval-Augmented Generation: A Survey, Li et al.,
- Ragas: Automated Evaluation of Retrieval Augmented Generation, Espinosa-Anke et al.,
- Human Evaluation Protocol for Generative AI Chatbots in Clinical Microbiology, Griego-Herrera et al.,
Synthetic vs. Realistic Evaluation
- Needle-in-a-Haystack (NIAH) and Synthetic Benchmarks, Research Area 2023-2024,
- ZeroSCROLLS: Realistic Natural Language Tasks, Benchmark 2023-2024,
- InfiniteBench: 100K+ Token Evaluation, Benchmark 2024,
Hypothesis Generation and Data-Driven Discovery
- Hypothesis Generation with Large Language Models, Liu et al.,
- GFlowNets for AI-Driven Scientific Discovery, Jain et al.,
- Literature Meets Data: A Synergistic Approach to Hypothesis Generation, Liu et al.,
- Machine Learning for Hypothesis Generation in Biology and Medicine, FieldSHIFT Team,
Automated Scientific Discovery
- The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery, Lu et al.,
- Automating Psychological Hypothesis Generation with AI, Johnson et al.,
- Can Large Language Models Replace Humans in Systematic Reviews?, Khraisha et al.,
AI for Science Integration and Future Directions
- AI for Science 2025: Convergence of AI Innovation and Scientific Discovery, Fink et al.,
- Towards Scientific Discovery with Generative AI: Progress, Opportunities, and Challenges, Anonymous et al.,
Deep Research Applications
- Accelerating scientific discovery with AI, MIT News,
- Accelerating scientific breakthroughs with an AI co-scientist, Google Research,
- Bridging AI and Science: Implications from a Large-Scale Literature Analysis of AI4Science, Various,
- AI for scientific discovery, World Economic Forum,
Context Engineering as a Core Discipline
- From Prompt Craft to System Design: Context Engineering as a Core Discipline for AI-Driven Delivery, Forte Group Team,
- Context Engineering: A Framework for Enterprise AI Operations, Shelly Palmer,
- How MCP Handles Context Management in High-Throughput Scenarios, Portkey.ai Team,
Enterprise AI Case Studies
- Case Study: JPMorgan's COiN Platform - Agentic AI for Financial Analysis, AI Mindset Research,
- Case Study: EY's Agentic AI Integration in Microsoft 365 Copilot, AI Mindset Research,
- Context Is Everything: The Massive Shift Making AI Actually Work in the Real World, Phil Mora,
Enterprise Applications and Infrastructure
- The Context Layer for Enterprise RAG Applications, Contextual AI Team,
- Navigating AI Model Deployment: Challenges and Solutions, Dean Lancaster,
- 2024: The State of Generative AI in the Enterprise, Menlo Ventures,
- How 100 Enterprise CIOs Are Building and Buying Gen AI in 2025, Andreessen Horowitz,
- Context Window Constraints: Despite improvements, context length remains a bottleneck
- Computational Overhead: Processing large contexts requires significant resources
- Context Coherence: Maintaining coherence across extended contexts
- Dynamic Adaptation: Real-time context updating challenges
- Infinite Context: Developing truly unlimited context capabilities
- Context Compression: Efficient representation of large contexts
- Multimodal Integration: Seamless integration of diverse data types
- Adaptive Context: Self-optimizing context management
- Context Privacy: Securing sensitive information in context pipelines
We welcome contributions to this survey! Please follow these guidelines:
- Fork the repository
- Create a feature branch
- Add relevant papers with proper formatting
- Submit a pull request with a clear description
<li><i><b>Paper Title</b></i>, Author et al., <a href="URL" target="_blank"><img src="https://img.shields.io/badge/SOURCE-YEAR.MM-COLOR" alt="SOURCE Badge"></a></li>
Badge colors:

- `red` for arXiv papers
- `blue` for conference/journal papers
- `white` for GitHub repositories
- `yellow` for HuggingFace resources
This project is licensed under the MIT License - see the LICENSE file for details.
For questions, suggestions, or collaboration opportunities, please feel free to reach out:
Lingrui Mei
📧 Email: meilingrui22@mails.ucas.ac.cn
You can also open an issue in this repository for general discussions and suggestions.
This survey builds upon the foundational work of the AI research community. We thank all researchers contributing to the advancement of context engineering and large language models.
Star ⭐ this repository if you find it helpful!