Beekeeper

Orchestrate multi-agent systems through a central supervisor agent and conversational interface

Alpha · Apache 2.0 · Follow on Bluesky · Join our Discord · LF AI & Data

Overview - Key Features - Installation - Quickstart - Contribute

Overview

Beekeeper is an experimental multi-agent orchestration system built on the BeeAI framework. It enables users to manage, supervise, and scale AI agents through a conversational interface without requiring manual configuration from scratch.

At the heart of Beekeeper is a supervisor agent, responsible for orchestrating specialized agents to achieve specific objectives. Its modular architecture enables the system to dynamically allocate resources and coordinate task execution. Just describe your objective, and Beekeeper coordinates specialized agents to get it done.

🎥 See Beekeeper in action by watching the demo video!

Core components

At its core, Beekeeper consists of three primary components:

  1. Supervision: A central supervisor agent oversees and coordinates multiple AI agents.
  2. Agent registry: A centralized repository of available agents.
  3. Task management: Manages and executes complex tasks, breaking them down into smaller sub-tasks.
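
A rough sketch of how these pieces fit together (illustrative only, not the exact internal message flow):

your objective
      |
      v
supervisor agent ----> agent registry    (find or spawn the specialized agents needed)
      |
      v
task management  ----> sub-task 1 ----> agent A
                 ----> sub-task 2 ----> agent B
                 ----> results assembled into the final answer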

Key features

Beekeeper offers several key features:

  • 🔄 Iterative development: Continuously refine agents and tasks by giving feedback to the supervisor agent.
  • 📝 Workspace persistence: Save and reuse configurations for efficiency and consistency.
  • 🚀 Parallel scalability: Run multiple agents simultaneously for complex tasks.
  • 🖥️ Unified interface: Manage all AI agents from one central hub.
  • 📡 Active monitoring: Get real-time insights to detect and fix issues quickly.

Installation

Note

Mise is used to manage tool versions (python, uv, nodejs, pnpm, ...), run tasks, and handle environments; it automatically downloads the required tools.

Clone the project, then run:

brew install mise  # more ways to install: https://mise.jdx.dev/installing-mise.html
mise trust
mise install
mise build
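
Mise also defines the project tasks used throughout this guide (build, interactive, monitor, autonomous). To see the full list available on your machine, run:

mise tasks  # list the tasks defined for this project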

Environment setup

Mise generates a .env file using the .env.template in the project root.
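
If the .env file is not generated for you, copying the template by hand works just as well (plain shell, nothing Beekeeper-specific):

cp .env.template .env  # then fill in the values described below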

1. Set your LLM provider

OpenAI (Recommended)
# LLM Provider (ollama/openai)
LLM_BACKEND="openai"

## OpenAI
OPENAI_API_KEY="<YOUR_OPEN_AI_API_KEY_HERE>"
OPENAI_MODEL_SUPERVISOR="gpt-4o"
OPENAI_MODEL_OPERATOR="gpt-4o"

Ollama
# LLM Provider (ollama/openai)
LLM_BACKEND="ollama"

## Ollama
OLLAMA_BASE_URL="http://0.0.0.0:11434/api"
OLLAMA_MODEL_SUPERVISOR="deepseek-r1:8b"
OLLAMA_MODEL_OPERATOR="deepseek-r1:8b"

Important

When using Ollama, ensure your model supports tool calling. Smaller models may lead to frequent incorrect tool calls. For stability, use a larger model like qwq:32b.
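
For example, to follow that advice you would pull the larger model and point the variables above at it (this assumes a local Ollama installation):

ollama pull qwq:32b  # download the model locally

## Ollama (.env)
OLLAMA_MODEL_SUPERVISOR="qwq:32b"
OLLAMA_MODEL_OPERATOR="qwq:32b"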

Combination of OpenAI / Ollama
# LLM Provider (ollama/openai)
LLM_BACKEND_SUPERVISOR="openai"
LLM_BACKEND_OPERATOR="ollama"

## OpenAI
OPENAI_API_KEY="<YOUR_OPEN_AI_API_KEY_HERE>"
OPENAI_MODEL_SUPERVISOR="gpt-4o"

## Ollama
OLLAMA_BASE_URL="http://0.0.0.0:11434/api"
OLLAMA_MODEL_OPERATOR="deepseek-r1:8b"

2. Set your search tool

Tavily (Recommended)

Tavily offers 1,000 free API credits/month without a credit card. Get your API key from Tavily Quickstart.

# Tools
SEARCH_TOOL="tavily"
TAVILY_API_KEY="<YOUR_TAVILY_API_KEY_HERE>"

DuckDuckGo
# Tools
SEARCH_TOOL="duckduckgo"

Quickstart

Here’s how to spin up your first multi-agent system:

  1. Run:

     WORKSPACE=trip_planner mise interactive

     This launches the interactive UI and creates a new workspace. Use interactive mode when building your system.

  2. Split the terminal, then run:

     mise monitor

     Watch live task execution and agent logs.

  3. Input the following prompt:

     I'm heading to Boston next week and need help planning a simple 3-day itinerary. I'll be staying in Back Bay and want to see historical sites, catch a hockey or basketball game, and enjoy great food. Can you recommend one dinner spot each night - Italian, Chinese, and French?

     The supervisor will break this down into subtasks and automatically configure agents.

  4. Modify an existing agent with a follow-up prompt:

     Can you change the instructions of the restaurant agent to only suggest restaurants that offer gluten-free options?

     Ask the supervisor to change or constrain the behavior of an agent.

  5. Close out of the session (press esc twice, then confirm) and start fresh in autonomous mode:

     WORKSPACE=trip_planner mise autonomous <<< "I'm traveling to Boston MA next week for 3 days. Create a 3-day itinerary with some excellent restaurant and sports game recommendations."

     All tasks and agents are preserved in output/workspaces/trip_planner. Once your system is set up, use autonomous mode for one-shot execution.
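
To confirm what was persisted, you can inspect the workspace directory (the exact file layout may differ between versions):

ls output/workspaces/trip_planner  # agent and task configurations saved by the supervisor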

You've just spun up and evolved your first multi-agent system with Beekeeper 👏

Now you're ready to iterate, expand, or even create something entirely new!


Interaction modes

The system operates in two modes: Interactive and Autonomous.

Interactive mode

Engage with the supervisor agent in real time via the Chat UI.

To start, run:

mise interactive

Use this mode when you want to:

  • 🧭 Define goals - Get real-time guidance
  • πŸŽ›οΈ Tune settings - Adjust agents and tasks as you go
  • πŸ› οΈ Modify live β€” Pause, tweak, or stop tasks mid-run

Tip

Monitor everything in another terminal: mise monitor.

Important

To avoid losing your work, always define a workspace: WORKSPACE=trip_planner mise interactive.

Autonomous mode

Execute tasks independently, ideal for batch jobs or one-off requests.

To start, run:

mise autonomous <<< "Hi, can you create a poem about each of these topics: bee, hive, queen, sun, flowers?"

In this mode:

  • ⚡ One command, one result
  • 👐 Zero interaction needed
  • 💤 Auto-shutdown after execution
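
The <<< form above is convenient for one-liners; for longer, multi-line prompts a standard shell here-document feeds the same stdin (the workspace name here is just an example):

WORKSPACE=poems mise autonomous << 'EOF'
Hi, can you create a poem about each of these topics:
bee, hive, queen, sun, flowers?
EOF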

Tip

Monitor everything in another terminal: mise monitor.

Important

To avoid losing your work, always define a workspace: WORKSPACE=your_workspace mise autonomous <<< "your_prompt".


Workspaces

Workspaces provide a persistence layer for your agent and task configurations, optimizing resource use. With workspaces, you can:

  1. Retain configurations across sessions, eliminating the need to rebuild setups.
  2. Iterate and refine configurations for improved performance.
  3. Ensure consistent processing while reducing token costs.

Once fine-tuned, configurations can be easily reused, making workflows more efficient.

Workspace directory

Workspaces are stored in the ./outputs/workspaces folder.

Creating or switching workspaces

To create or switch to a different workspace, set the WORKSPACE variable when launching your session:

WORKSPACE=my_workspace mise interactive
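
To see which workspaces already exist before switching, list the directory mentioned above:

ls ./outputs/workspaces  # one sub-folder per workspace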

Documentation

🚧 Comprehensive documentation is under construction. If you have any questions, reach out to us on Discord!


Contribute

We're passionate about building a better Beekeeper, and we couldn't do it without your help! Our project is open-source and community-driven.

  • Want to share an idea or have a question? Reach out to us on Discord.
  • Find a bug or have a feature request? Open an issue.
  • Want to contribute? Check out our contribution guidelines.

We appreciate all types of contributions!

Maintainers

For information about maintainers, see MAINTAINERS.md.

Code of conduct

This project and everyone participating in it are governed by the Code of Conduct. By participating, you are expected to uphold this code. Please read the full text so that you know which actions may or may not be tolerated.

Legal notice

All content in these repositories including code has been provided by IBM under the associated open source software license and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open source project (not as an IBM product), and IBM makes no assertions as to the level of quality nor security, and will not be maintaining this code going forward.


Developed by contributors to the BeeAI project, this initiative is part of the Linux Foundation AI & Data program. Its development follows open, collaborative, and community-driven practices.
