# Agentic Ablation

A framework for automated code ablation studies using LLM agents. It analyzes the importance of individual components in neural network architectures through systematic removal and testing.
Agentic Ablation uses a multi-agent workflow to automatically:
- Analyze code with neural network architectures
- Generate ablated versions (with specific components removed)
- Test the modified code to ensure it remains functional
- Analyze the impact of removals on model performance
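The first two steps above can be illustrated with a minimal sketch that scans source code for the marker comment. The real framework delegates this to LLM agents; `find_ablatable_lines` and `ablate_line` are hypothetical names for illustration, not the project's API:

```python
MARKER = "#ABLATABLE_COMPONENT"

def find_ablatable_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, code) pairs for every line tagged with the marker."""
    return [
        (i, line.strip())
        for i, line in enumerate(source.splitlines(), start=1)
        if MARKER in line
    ]

def ablate_line(source: str, line_number: int) -> str:
    """Comment out one tagged line, producing an ablated variant of the source."""
    lines = source.splitlines()
    lines[line_number - 1] = "# ABLATED: " + lines[line_number - 1]
    return "\n".join(lines)
```

Each ablated variant is then executed and compared against the original model.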
## Features

- Automated Ablation: Identifies components marked with `#ABLATABLE_COMPONENT` comments
- Multi-Agent System: Specialized agents for code generation, execution, reflection, and analysis
- Failure Recovery: Built-in reflection and retry mechanisms for robust execution
- Visualization: Generates comparison plots between original and ablated models
- Result Analysis: Provides detailed insights on the impact of ablated components
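The failure-recovery feature can be sketched as a generate/execute/reflect loop, where a failed run's error output is fed back into the next generation attempt. This is a minimal illustration, assuming callable agents; `run_with_reflection` is a hypothetical name, not the framework's API:

```python
def run_with_reflection(generate, execute, max_retries: int = 3):
    """Generate code, run it, and feed any failure back into the next attempt."""
    feedback = None
    result = None
    for _ in range(max_retries):
        code = generate(feedback)      # code-generation agent
        ok, result = execute(code)     # execution agent
        if ok:
            return result              # success: hand off to analysis
        feedback = result              # reflection: the error guides the retry
    raise RuntimeError(f"all {max_retries} attempts failed: {result}")
```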
## Requirements

- Python 3.13+
- OpenAI API key (for LLM agents)
## Installation

```bash
# Clone the repository
git clone https://github.com/yourusername/agentic-ablation.git
cd agentic-ablation

# Install dependencies with uv (using pyproject.toml)
uv sync
```
## Usage

1. Mark ablatable components in your neural network code:

   ```python
   self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)  #ABLATABLE_COMPONENT
   ```

2. Run the ablation study:

   ```bash
   make run-agent
   ```

   This uses the `uv run` command defined in the Makefile.

3. View the results in the generated JSON files and PDF reports.
## Project Structure

The framework is organized into specialized modules:

- `agents/`: Implementation of each specialized agent
- `models/`: Data schemas for code and analysis
- `workflow/`: LangGraph-based workflow configuration
- `utils/`: Helper functions for file operations
- `prompts/`: LLM prompts for each agent
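The agent workflow lives in `workflow/` and is wired with LangGraph, but conceptually it is a small state machine where each node is an agent that transforms shared state and names its successor. A plain-Python sketch of that idea (node names mirror the agent roles above but are illustrative):

```python
def run_workflow(state: dict, nodes: dict, max_steps: int = 10) -> dict:
    """Step through named nodes; each node returns (next_node_name, new_state)."""
    current = "generate"
    for _ in range(max_steps):
        if current == "end":
            return state
        current, state = nodes[current](state)
    raise RuntimeError("workflow did not terminate")
```

In the actual framework, the reflection node can route back to generation on failure instead of always moving forward.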
## License

MIT