WANN-LLM is a novel framework that combines the concepts of Weight Agnostic Neural Networks (WANN) with Large Language Models (LLMs) to create evolving networks of specialized agents. By adapting WANN's key insights - topology evolution and weight-agnostic learning - to the LLM domain, we create robust and efficient agent networks that can tackle complex tasks through emergent cooperation.
- Weight-Agnostic Design: Instead of learned weights, connections represent probabilistic activation paths between specialized LLM agents
- Topology Evolution: Uses NEAT-style evolution to discover optimal agent network structures
- Role Specialization: Each node represents an LLM agent with a specific role template (analogous to activation functions in WANN)
- Resource Efficiency: Optimizes both task performance and computational resource usage
- Flexible Task Support: Includes built-in support for various tasks (math, classification, code review, etc.)
```bash
git clone https://github.com/yourusername/wann-llm.git
cd wann-llm
pip install -r requirements.txt
```
Here's a simple example of solving a math problem:
```python
import asyncio

from wann_llm.core.experiment import ExperimentConfig, ExperimentRunner

# Create experiment configuration
config = ExperimentConfig(
    experiment_name="math_example",
    task_type="math",
    save_dir="experiments/math_001",
    random_seed=42,
)

async def main():
    # Initialize and run the experiment on (problem, expected answer) pairs
    runner = ExperimentRunner(config)
    await runner.run([
        ("Calculate 2 + 3 * 4", "14"),
        ("Solve x: 2x + 5 = 13", "4"),
    ])

asyncio.run(main())
```
Each node in the network is an LLM agent with a specialized role, defined by a prompt template. Roles include:
- Problem analyzers
- Step-by-step solvers
- Solution validators
- Feature extractors
- ...and other task-specific roles (a minimal role-template sketch follows this list)
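For illustration, a role can be thought of as a named prompt template that specializes an agent. The `RoleTemplate` class below is a hypothetical sketch, not part of the WANN-LLM API:

```python
# Hypothetical illustration of a role expressed as a prompt template.
# The class name and fields are assumptions, not the actual WANN-LLM API.
from dataclasses import dataclass

@dataclass
class RoleTemplate:
    name: str
    prompt: str  # system prompt that specializes the agent

    def render(self, task_input: str) -> str:
        """Combine the role prompt with the incoming task text."""
        return f"{self.prompt}\n\nInput:\n{task_input}"

analyzer = RoleTemplate(
    name="analyzer",
    prompt="Break down the problem into smaller, well-defined sub-steps.",
)
print(analyzer.render("Solve x: 2x + 5 = 13"))
```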
Unlike traditional neural networks with learned weights, connections in WANN-LLM represent paths for information flow between agents. The network evolves to find optimal connectivity patterns.
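As a rough sketch of this idea, an edge can carry an activation probability that decides whether a message is forwarded on a given pass. The `Edge` class and routing rule below are illustrative assumptions rather than the framework's actual internals:

```python
# Minimal sketch of a probabilistic activation path between two agents.
import random

class Edge:
    def __init__(self, source: str, target: str, activation_prob: float):
        self.source = source                     # upstream agent role
        self.target = target                     # downstream agent role
        self.activation_prob = activation_prob   # chance the message is forwarded

    def fires(self) -> bool:
        """Decide whether information flows along this edge on a given pass."""
        return random.random() < self.activation_prob

edges = [Edge("analyzer", "solver", 0.9), Edge("analyzer", "validator", 0.4)]
active_targets = [e.target for e in edges if e.fires()]
```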
The framework uses NEAT-style evolution (sketched after this list) to:
- Add/remove connections between agents
- Add new agent nodes with specialized roles
- Optimize network topology for both performance and efficiency
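A minimal sketch of what such NEAT-style mutations might look like is shown below; the genome representation (lists of node roles and edges) and operator choices are assumptions for illustration only:

```python
# Illustrative structural mutations in the spirit of NEAT.
import random

ROLES = ["analyzer", "solver", "validator", "feature_extractor"]

def mutate(genome: dict) -> dict:
    """Apply one random structural mutation to a network genome."""
    op = random.choice(["add_edge", "remove_edge", "add_node"])
    if op == "add_edge" and len(genome["nodes"]) >= 2:
        src, dst = random.sample(genome["nodes"], 2)
        genome["edges"].append((src, dst))
    elif op == "remove_edge" and genome["edges"]:
        genome["edges"].remove(random.choice(genome["edges"]))
    elif op == "add_node":
        new_node = f"{random.choice(ROLES)}_{len(genome['nodes'])}"
        genome["nodes"].append(new_node)
        if genome["edges"]:
            # splice the new node into an existing connection, NEAT-style
            src, dst = genome["edges"].pop(random.randrange(len(genome["edges"])))
            genome["edges"] += [(src, new_node), (new_node, dst)]
    return genome
```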
```python
# Example math task configuration
config = {
    "name": "math_reasoning",
    "type": "qa",
    "config": {
        "role_templates": {
            "analyzer": "Break down complex problems...",
            "solver": "Solve step by step...",
            "validator": "Verify the solution...",
        }
    }
}
```
```python
# Example classification task
config = {
    "name": "spam_detection",
    "type": "classification",
    "config": {
        "role_templates": {
            "feature_extractor": "Identify key patterns...",
            "classifier": "Classify based on features...",
            "confidence_estimator": "Assess classification...",
        }
    }
}
```
The framework optimizes for multiple objectives (a combined fitness sketch follows this list):
- Task accuracy
- Token usage efficiency
- Response time
- Error rate
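One possible way to fold these objectives into a single fitness score is sketched below; the weights and normalization constants are placeholders, not values used by the framework:

```python
# Hypothetical composite fitness: reward accuracy, penalize cost, latency, and errors.
def fitness(accuracy: float, tokens_used: int, response_time_s: float,
            error_rate: float) -> float:
    token_penalty = tokens_used / 10_000    # assumed token budget for normalization
    time_penalty = response_time_s / 60.0   # assumed latency horizon for normalization
    return accuracy - 0.2 * token_penalty - 0.1 * time_penalty - 0.5 * error_rate

score = fitness(accuracy=0.85, tokens_used=3200, response_time_s=12.0, error_rate=0.05)
```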
We welcome contributions! Please see our Contributing Guide for details.
If you use WANN-LLM in your research, please cite:

```bibtex
@article{wannllm2024,
  title={WANN-LLM: Weight Agnostic Neural Networks for LLM Agents},
  author={Your Name},
  year={2024}
}
```
This project is licensed under the MIT License - see the LICENSE file for details.