Wenbobobo/AgentWANN
WANN-LLM: Weight Agnostic Neural Networks for LLM Agents

Overview

WANN-LLM is a framework that combines the concepts of Weight Agnostic Neural Networks (WANN) with Large Language Models (LLMs) to create evolving networks of specialized agents. By adapting WANN's key insights, topology evolution and weight-agnostic learning, to the LLM domain, it builds robust and efficient agent networks that tackle complex tasks through emergent cooperation.

Key Features

  • Weight-Agnostic Design: Instead of learned weights, connections represent probabilistic activation paths between specialized LLM agents
  • Topology Evolution: Uses NEAT-style evolution to discover optimal agent network structures
  • Role Specialization: Each node represents an LLM agent with a specific role template (analogous to activation functions in WANN)
  • Resource Efficiency: Optimizes both task performance and computational resource usage
  • Flexible Task Support: Includes built-in support for various tasks (math, classification, code review, etc.)

Installation

```bash
git clone https://github.com/yourusername/wann-llm.git
cd wann-llm
pip install -r requirements.txt
```

Quick Start

Here's a simple example of solving a math problem:

```python
import asyncio

from wann_llm.core.network import ProbabilisticAgentNetwork
from wann_llm.core.evolution import Evolution, EvolutionMode
from wann_llm.core.experiment import ExperimentConfig, ExperimentRunner

# Create the experiment configuration
config = ExperimentConfig(
    experiment_name="math_example",
    task_type="math",
    save_dir="experiments/math_001",
    random_seed=42
)

# Initialize the runner and evaluate on (problem, expected answer) pairs.
# runner.run is a coroutine, so it must be driven by an event loop.
runner = ExperimentRunner(config)
asyncio.run(runner.run([
    ("Calculate 2 + 3 * 4", "14"),
    ("Solve x: 2x + 5 = 13", "4")
]))
```

Key Concepts

1. Agent Nodes

Each node in the network is an LLM agent with a specialized role, defined by a prompt template. Roles include:

  • Problem analyzers
  • Step-by-step solvers
  • Solution validators
  • Feature extractors
  • etc.
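
A node can be modeled as a role template plus an ID; the template fixes how the node transforms incoming text, much as an activation function fixes a WANN node's response. The following is a minimal sketch; the `AgentNode` class and `build_prompt` method are illustrative names, not the framework's actual API.

```python
from dataclasses import dataclass

@dataclass
class AgentNode:
    """One node in the network: an LLM agent defined by a role prompt.

    The role template plays the part an activation function plays in a
    classical WANN: it determines how the node transforms its input.
    """
    node_id: int
    role: str           # e.g. "analyzer", "solver", "validator"
    prompt_template: str

    def build_prompt(self, task_input: str) -> str:
        # Fill the role template with the incoming message.
        return self.prompt_template.format(input=task_input)

# A hypothetical analyzer node mirroring the roles listed above.
analyzer = AgentNode(0, "analyzer", "Break down the problem: {input}")
print(analyzer.build_prompt("2x + 5 = 13"))
```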

2. Probabilistic Connections

Unlike traditional neural networks with learned weights, connections in WANN-LLM represent paths for information flow between agents. The network evolves to find optimal connectivity patterns.
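
One way to picture this: each edge carries an activation probability, and a forward pass samples which edges fire. The sketch below assumes this sampling semantics for illustration; the edge table and `active_successors` helper are hypothetical, not taken from the framework.

```python
import random

# Connections carry activation probabilities rather than learned weights:
# each edge (src, dst) fires with probability p, routing the message on.
connections = {
    (0, 1): 0.9,   # analyzer -> solver
    (0, 2): 0.3,   # analyzer -> validator (occasional early check)
    (1, 2): 0.8,   # solver   -> validator
}

def active_successors(node: int, rng: random.Random) -> list[int]:
    """Sample which outgoing edges fire on this forward pass."""
    return [dst for (src, dst), p in connections.items()
            if src == node and rng.random() < p]

rng = random.Random(42)
print(active_successors(0, rng))
```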

3. Evolution Process

The framework uses NEAT-style evolution to:

  • Add/remove connections between agents
  • Add new agent nodes with specialized roles
  • Optimize network topology for both performance and efficiency
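
The three mutation operators above can be sketched over a simple genome of nodes and probabilistic edges. This is an illustrative NEAT-style operator set under assumed genome structure, not the framework's actual implementation; the "add node" case splits an existing edge, as in classic NEAT.

```python
import random

def mutate(genome: dict, roles: list[str], rng: random.Random) -> dict:
    """Apply one NEAT-style structural mutation to a network genome.

    genome = {"nodes": {id: role}, "edges": {(src, dst): prob}}
    """
    g = {"nodes": dict(genome["nodes"]), "edges": dict(genome["edges"])}
    op = rng.choice(["add_edge", "add_node", "remove_edge"])
    if op == "add_edge":
        src, dst = rng.sample(sorted(g["nodes"]), 2)
        g["edges"].setdefault((src, dst), rng.random())
    elif op == "add_node" and g["edges"]:
        # Split an existing edge, placing a new specialized agent in the middle.
        src, dst = rng.choice(sorted(g["edges"]))
        new_id = max(g["nodes"]) + 1
        g["nodes"][new_id] = rng.choice(roles)
        p = g["edges"].pop((src, dst))
        g["edges"][(src, new_id)] = p
        g["edges"][(new_id, dst)] = p
    elif op == "remove_edge" and g["edges"]:
        g["edges"].pop(rng.choice(sorted(g["edges"])))
    return g

parent = {"nodes": {0: "analyzer", 1: "solver"}, "edges": {(0, 1): 0.9}}
child = mutate(parent, ["validator", "extractor"], random.Random(7))
print(child)
```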

Task Examples

Math Problem Solving

```python
# Example math task configuration
config = {
    "name": "math_reasoning",
    "type": "qa",
    "config": {
        "role_templates": {
            "analyzer": "Break down complex problems...",
            "solver": "Solve step by step...",
            "validator": "Verify the solution..."
        }
    }
}
```

Spam Detection

```python
# Example classification task
config = {
    "name": "spam_detection",
    "type": "classification",
    "config": {
        "role_templates": {
            "feature_extractor": "Identify key patterns...",
            "classifier": "Classify based on features...",
            "confidence_estimator": "Assess classification..."
        }
    }
}
```

Performance & Resource Usage

The framework optimizes for multiple objectives:

  • Task accuracy
  • Token usage efficiency
  • Response time
  • Error rate
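
These objectives can be folded into one scalar fitness, for example via a weighted sum with penalty terms. The weights and budgets below are hypothetical values chosen for illustration, not the framework's actual scalarization.

```python
def fitness(accuracy: float, tokens_used: int, latency_s: float,
            token_budget: int = 4000, latency_budget_s: float = 30.0) -> float:
    """Fold task accuracy, token cost, and response time into one score.

    Weighted-sum scalarization is an assumption for illustration; error
    rate is captured implicitly as (1 - accuracy).
    """
    token_penalty = min(tokens_used / token_budget, 1.0)      # 0..1
    latency_penalty = min(latency_s / latency_budget_s, 1.0)  # 0..1
    return 0.7 * accuracy - 0.2 * token_penalty - 0.1 * latency_penalty

# A network that is 90% accurate, used 1000 tokens, and answered in 3 s.
print(round(fitness(accuracy=0.9, tokens_used=1000, latency_s=3.0), 4))
```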

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Citation

If you use WANN-LLM in your research, please cite:

```bibtex
@article{wannllm2024,
    title={WANN-LLM: Weight Agnostic Neural Networks for LLM Agents},
    author={Your Name},
    year={2024}
}
```

License

This project is licensed under the MIT License - see the LICENSE file for details.
