
🚀 Make It heavy

A Python framework to emulate Grok heavy functionality using a powerful multi-agent system. Built on OpenRouter's API, Make It heavy delivers comprehensive, multi-perspective analysis through intelligent agent orchestration.

🌟 Features

  • 🧠 Grok heavy Emulation: Multi-agent system that delivers deep, comprehensive analysis like Grok heavy mode
  • 🔀 Parallel Intelligence: Deploy 4 specialized agents simultaneously for maximum insight coverage
  • 🎯 Dynamic Question Generation: AI creates custom research questions tailored to each query
  • ⚡ Real-time Orchestration: Live visual feedback during multi-agent execution
  • 🛠️ Hot-Swappable Tools: Automatically discovers and loads tools from the tools/ directory
  • 🔄 Intelligent Synthesis: Combines multiple agent perspectives into unified, comprehensive answers
  • 🎮 Single Agent Mode: Run individual agents for simpler tasks with full tool access

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • uv (recommended Python package manager)
  • OpenRouter API key

Installation

  1. Clone the repository and set up the environment:
git clone https://github.com/Doriandarko/make-it-heavy.git
cd make-it-heavy

# Create virtual environment with uv
uv venv

# Activate virtual environment
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  2. Install dependencies:
uv pip install -r requirements.txt
  3. Configure your API key:
# Edit config.yaml and replace YOUR API KEY HERE with your OpenRouter API key
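
The key lives under the openrouter block of config.yaml (the full file is shown in the Configuration section below):

openrouter:
  api_key: "YOUR API KEY HERE"   # replace with your OpenRouter API key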

🎯 Usage

Single Agent Mode

Run a single intelligent agent with full tool access:

uv run main.py

What it does:

  • Loads a single agent with all available tools
  • Processes your query step-by-step
  • Uses tools like web search, calculator, file operations
  • Returns comprehensive response when task is complete

Example:

User: Research the latest developments in AI and summarize them
Agent: [Uses search tool, analyzes results, provides summary]
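
If you would rather embed the single agent in your own script than use the CLI, a minimal sketch could look like the following. OpenRouterAgent is the class name used in the Debug Mode section of this README; the run method name is an assumption, so check agent.py for the actual entry point.

# Minimal sketch -- the 'run' method name is an assumption; check agent.py
from agent import OpenRouterAgent

agent = OpenRouterAgent()  # settings come from config.yaml
result = agent.run("Research the latest developments in AI and summarize them")
print(result)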

Grok heavy Mode (Multi-Agent Orchestration)

Emulate Grok heavy's deep analysis with 4 parallel intelligent agents:

uv run make_it_heavy.py

How Make It heavy works:

  1. 🎯 AI Question Generation: Creates 4 specialized research questions from your query
  2. 🔀 Parallel Intelligence: Runs 4 agents simultaneously with different analytical perspectives
  3. ⚡ Live Progress: Shows real-time agent status with visual progress bars
  4. 🔄 Intelligent Synthesis: Combines all perspectives into one comprehensive Grok heavy-style answer

Example Flow:

User Query: "Who is Pietro Schirano?"

AI Generated Questions:
- Agent 1: "Research Pietro Schirano's professional background and career history"
- Agent 2: "Analyze Pietro Schirano's achievements and contributions to technology"  
- Agent 3: "Find alternative perspectives on Pietro Schirano's work and impact"
- Agent 4: "Verify and cross-check information about Pietro Schirano's current role"

Result: Grok heavy-style comprehensive analysis combining all agent perspectives

๐Ÿ—๏ธ Architecture

Orchestration Flow

graph TD
    A[User Input] --> B[Question Generation Agent]
    B --> C[Generate 4 Specialized Questions]
    C --> D[Parallel Agent Execution]
    D --> E[Agent 1: Research]
    D --> F[Agent 2: Analysis] 
    D --> G[Agent 3: Alternatives]
    D --> H[Agent 4: Verification]
    E --> I[Synthesis Agent]
    F --> I
    G --> I
    H --> I
    I --> J[Comprehensive Final Answer]

Core Components

1. Agent System (agent.py)

  • Self-contained: Complete agent implementation with tool access
  • Agentic Loop: Continues working until task completion
  • Tool Integration: Automatic tool discovery and execution
  • Configurable: Uses config.yaml for all settings
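
Conceptually, the agentic loop boils down to something like the sketch below. It is illustrative only, written against the standard OpenAI-compatible client that OpenRouter exposes; agent.py's actual implementation will differ.

# Illustrative sketch of an agentic loop -- not the exact agent.py code.
# Assumes the OpenAI-compatible Python client (pip install openai), a pre-built
# 'tools' schema list, and a name -> tool instance mapping 'tool_registry'.
import json
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR API KEY HERE")

def agentic_loop(user_query, tools, tool_registry, max_iterations=10):
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_iterations):
        response = client.chat.completions.create(
            model="openai/gpt-4.1-mini", messages=messages, tools=tools)
        msg = response.choices[0].message
        messages.append(msg)
        if not msg.tool_calls:
            return msg.content                          # model answered directly
        for call in msg.tool_calls:
            args = json.loads(call.function.arguments)
            result = tool_registry[call.function.name].execute(**args)
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "content": json.dumps(result)})
            if call.function.name == "mark_task_complete":
                return result                           # task completion signal
    return "Reached max_iterations without completing the task"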

2. Orchestrator (orchestrator.py)

  • Dynamic Question Generation: AI creates specialized questions
  • Parallel Execution: Runs multiple agents simultaneously
  • Response Synthesis: AI combines all agent outputs
  • Error Handling: Graceful fallbacks and error recovery
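
A rough sketch of the parallel-execution step using concurrent.futures is shown below; the real orchestrator.py may be organized differently, and the run method name is an assumption.

# Rough sketch of parallel agent execution -- illustrative only.
from concurrent.futures import ThreadPoolExecutor

from agent import OpenRouterAgent   # class name taken from the Debug Mode example below

def run_agents_in_parallel(questions, task_timeout=300):
    def run_one(question):
        agent = OpenRouterAgent(silent=True)   # one independent agent per question
        return agent.run(question)             # 'run' is an assumed entry point
    with ThreadPoolExecutor(max_workers=len(questions)) as pool:
        futures = [pool.submit(run_one, q) for q in questions]
        return [f.result(timeout=task_timeout) for f in futures]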

3. Tool System (tools/)

  • Auto-Discovery: Automatically loads all tools from directory
  • Hot-Swappable: Add new tools by dropping files in tools/
  • Standardized Interface: All tools inherit from BaseTool
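
The auto-discovery idea can be implemented by scanning the package for BaseTool subclasses; the sketch below shows the approach and is not necessarily the exact tools/__init__.py code.

# Sketch of tool auto-discovery (as it might appear in tools/__init__.py) -- illustrative only.
import importlib
import pkgutil

from .base_tool import BaseTool

def discover_tools():
    tools = {}
    for module_info in pkgutil.iter_modules(__path__):               # every module in tools/
        module = importlib.import_module(f"{__name__}.{module_info.name}")
        for obj in vars(module).values():                            # pick out BaseTool subclasses
            if isinstance(obj, type) and issubclass(obj, BaseTool) and obj is not BaseTool:
                tool = obj()
                tools[tool.name] = tool                               # keyed by the tool's declared name
    return tools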

Available Tools

Tool                Purpose                           Parameters
search_web          Web search with DuckDuckGo        query, max_results
calculate           Safe mathematical calculations    expression
read_file           Read file contents                path, head, tail
write_file          Create/overwrite files            path, content
mark_task_complete  Signal task completion            task_summary, completion_message
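
Tools can also be exercised directly in Python while developing; the class name below is an assumption inferred from the file names in tools/, so check the actual files.

# Direct tool invocation -- the class name is an assumption; check tools/calculator_tool.py
from tools.calculator_tool import CalculatorTool

result = CalculatorTool().execute(expression="2 * (3 + 4)")
print(result)   # expected to be a dict such as {"result": 14}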

⚙️ Configuration

Edit config.yaml to customize behavior:

# OpenRouter API settings
openrouter:
  api_key: "YOUR KEY"
  base_url: "https://openrouter.ai/api/v1"
  model: "openai/gpt-4.1-mini"  # Change model here

# Agent settings
agent:
  max_iterations: 10

# Orchestrator settings
orchestrator:
  parallel_agents: 4  # Number of parallel agents
  task_timeout: 300   # Timeout per agent (seconds)
  
  # Dynamic question generation prompt
  question_generation_prompt: |
    You are an orchestrator that needs to create {num_agents} different questions...
    
  # Response synthesis prompt  
  synthesis_prompt: |
    You have {num_responses} different AI agents that analyzed the same query...

# Tool settings
search:
  max_results: 5
  user_agent: "Mozilla/5.0 (compatible; OpenRouter Agent)"
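
The same file can be read from your own scripts with PyYAML, which is also a quick way to sanity-check your edits:

# Quick sanity check of config.yaml using PyYAML
import yaml

with open("config.yaml") as f:
    config = yaml.safe_load(f)

print(config["openrouter"]["model"])              # e.g. "openai/gpt-4.1-mini"
print(config["orchestrator"]["parallel_agents"])  # e.g. 4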

🔧 Development

Adding New Tools

  1. Create a new file in tools/ directory
  2. Inherit from BaseTool
  3. Implement required methods:
from .base_tool import BaseTool

class MyCustomTool(BaseTool):
    @property
    def name(self) -> str:
        return "my_tool"
    
    @property
    def description(self) -> str:
        return "Description of what this tool does"
    
    @property
    def parameters(self) -> dict:
        return {
            "type": "object",
            "properties": {
                "param": {"type": "string", "description": "Parameter description"}
            },
            "required": ["param"]
        }
    
    def execute(self, param: str) -> dict:
        # Tool implementation
        return {"result": "success"}
  4. The tool will be automatically discovered and loaded!
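
For example, saving the class above as tools/my_custom_tool.py is all that is required; on the next run, auto-discovery will pick it up and my_tool will appear in the agent's tool list.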

Customizing Models

Any OpenRouter-compatible model is supported; set a single model value in config.yaml, for example:

openrouter:
  model: "anthropic/claude-3.5-sonnet"     # For complex reasoning
  model: "openai/gpt-4.1-mini"             # For cost efficiency  
  model: "google/gemini-2.0-flash-001"     # For speed
  model: "meta-llama/llama-3.1-70b"        # For open source

Adjusting Agent Count

Change number of parallel agents:

orchestrator:
  parallel_agents: 6  # Run 6 agents instead of 4

Note: Make sure your OpenRouter plan supports this level of concurrent usage!

🎮 Examples

Research Query

User: "Analyze the impact of AI on software development in 2024"

Single Agent: Comprehensive research report
Grok heavy Mode: 4 specialized perspectives combined into deep, multi-faceted analysis

Technical Question

User: "How do I optimize a React application for performance?"

Single Agent: Step-by-step optimization guide
Grok heavy Mode: Research + Analysis + Alternatives + Verification = Complete expert guide

Creative Task

User: "Create a business plan for an AI startup"

Single Agent: Structured business plan
Grok heavy Mode: Market research + Financial analysis + Competitive landscape + Risk assessment

🛠️ Troubleshooting

Common Issues

API Key Error:

Error: Invalid API key
Solution: Update config.yaml with a valid OpenRouter API key

Tool Import Error:

Error: Could not load tool from filename.py
Solution: Check that the tool inherits from BaseTool and implements the required methods

Synthesis Failure:

🚨 SYNTHESIS FAILED: [error message]
Solution: Check model compatibility and API limits

Timeout Issues:

Agent timeout errors
Solution: Increase task_timeout in config.yaml

Debug Mode

For detailed debugging, modify the orchestrator to show the synthesis process:

# In orchestrator.py
synthesis_agent = OpenRouterAgent(silent=False)  # Enable debug output

📁 Project Structure

make-it-heavy/
├── main.py                 # Single agent CLI
├── make_it_heavy.py        # Multi-agent orchestrator CLI
├── agent.py                # Core agent implementation
├── orchestrator.py         # Multi-agent orchestration logic
├── config.yaml             # Configuration file
├── requirements.txt        # Python dependencies
├── README.md               # This file
└── tools/                  # Tool system
    ├── __init__.py         # Auto-discovery system
    ├── base_tool.py        # Tool base class
    ├── search_tool.py      # Web search
    ├── calculator_tool.py  # Math calculations
    ├── read_file_tool.py   # File reading
    ├── write_file_tool.py  # File writing
    └── task_done_tool.py   # Task completion

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Add new tools or improve existing functionality
  4. Test with both single and multi-agent modes
  5. Submit a pull request

📝 License

MIT License with Commercial Attribution Requirement

For products with 100K+ users: Please include attribution to Pietro Schirano and mention the "Make It heavy" framework in your documentation or credits.

See LICENSE file for full details.

🙏 Acknowledgments

  • Built with OpenRouter for LLM API access
  • Uses uv for Python package management
  • Inspired by Grok heavy mode and advanced multi-agent AI systems

Ready to make it heavy? 🚀

uv run make_it_heavy.py
