Ebiose is a distributed artificial intelligence factory, an open-source project from the incubator of Inria, the French national research institute for digital science. Our vision: enabling humans and agents to collaborate in building tomorrow's AI in an open and democratic way.
"AI can just as easily become the weapon of a surveillance capitalism dystopia as the foundation of a democratic renaissance."
Must read
- Founding blog post (10 min)
- Glossary (3 min)
This first beta version implements the foundations of our vision.
- Architect agents: Specialized AIs for designing and evolving other agents
- Darwinian engine: Evolutionary system enabling continuous improvement of agents through mutation and selection
- Forges: Isolated environments where architect agents create custom agents to solve specific problems
- LangGraph Compatibility: Integration with the LangGraph ecosystem for agent orchestration
With the latest release (June 2025):
- A shared centralized ecosystem: Use Ebiose's cloud to kickstart a forge cycle with curated agents from our shared ecosystem. The top-performing agents are automatically promoted and reintegrated, making the ecosystem stronger with every cycle. [Access the Ebiose cloud now.]
- LiteLLM support: Ebiose now integrates with LiteLLM to simplify the management of your own LLMs.
- Proof of concept: Don't expect complex or production-ready agents
- Initial architect agent to be improved: The first implemented architect agent is still simple
- Early stage: Be prepared to work through initial issues and contribute to improvements!
First, clone the repository:

```shell
git clone git@github.com:ebiose-ai/ebiose.git && cd ebiose
```
Initialize the project by running:

```shell
make init
```

This command will:

- Copy `model_endpoints_template.yml` to `model_endpoints.yml` if the file doesn't exist, and instruct you to fill it with your API keys.
- Copy `.env.example` to `.env` if the file doesn't exist.
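The two copy steps can be sketched in Python. This is a hypothetical equivalent of the Makefile target, written from the description above; the actual target may differ.

```python
# Sketch of what `make init` does, based on the description above;
# the actual Makefile target may do more (e.g. print instructions).
import shutil
from pathlib import Path


def init(root: str = ".") -> None:
    root_path = Path(root)
    pairs = [
        ("model_endpoints_template.yml", "model_endpoints.yml"),
        (".env.example", ".env"),
    ]
    for src, dst in pairs:
        # Only copy when the destination doesn't already exist,
        # so re-running init never clobbers your credentials.
        if not (root_path / dst).exists():
            shutil.copy(root_path / src, root_path / dst)
```

Because existing files are left untouched, running `make init` again after filling in your keys is safe.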
There are two ways to start running Ebiose:

- The most straightforward way is to use Docker: go to the With Docker section. (Docker support for the new release is currently untested. See Issue #26 for details.)
- If you are not yet comfortable with Ebiose and wish to understand the basics step by step, install the project dependencies and work through the `quickstart.ipynb` Jupyter notebook; follow the steps in the Locally section.
Ebiose uses uv as its packaging and dependency manager. See Astral's uv documentation to install it.
Once uv is installed, use it to install the project dependencies. In your project directory, run:

```shell
# Install all required dependencies
uv sync
```
By default, Ebiose supports OpenAI models, but other major providers can also be used. Refer to the Model APIs support section.
For more detailed instructions or troubleshooting tips, refer to the official uv documentation.
Tip: If you don't want to use uv, you can still run `pip install -r requirements.txt`.
Pro tip: You may need to add the root of the repository to your `PYTHONPATH` environment variable. Alternatively, use a `.env` file to do so.
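For example, a `.env` file at the repository root might contain (the path is illustrative; use the location of your own clone):

```
# .env -- make the ebiose package importable from anywhere
PYTHONPATH=/path/to/ebiose
```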
The Jupyter notebook `quickstart.ipynb` is the easiest way to understand the basics and start experimenting with Ebiose. This notebook lets you try out architect agents and forges on your very own challenges.
To go further, the `examples/` directory features a complete forge example designed to optimize agents that solve math problems. Check out `examples/math_forge/math_forge.py` for the implementation of the `MathLangGraphForge` forge.
For demonstration purposes, the `run.py` script is configured to manage a forge cycle with only two agents per generation, using a tiny budget of $0.02. The cycle should take 1 to 2 minutes to consume the budget using the default model endpoint `gpt-4o-mini`. Each generated agent is evaluated on 5 math problems from the GSM-8K test dataset.
To run a cycle of the Math forge, execute the following command in your project directory:

```shell
uv run ./examples/math_forge/run.py
```
Kick off your journey by implementing your own forge with its `compute_fitness` method!
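As a rough illustration, a fitness function for a math forge could simply score the fraction of correct answers. The name `compute_fitness` comes from the text above, but the signature and problem format here are assumptions; check `examples/math_forge/math_forge.py` for the real interface.

```python
# Hypothetical fitness function for a math-solving agent.
# The signature is illustrative; Ebiose's actual API may differ.

def compute_fitness(agent_answers: list[str], expected_answers: list[str]) -> float:
    """Return the fraction of problems the agent answered correctly."""
    if not expected_answers:
        return 0.0
    correct = sum(a == e for a, e in zip(agent_answers, expected_answers))
    return correct / len(expected_answers)
```

With this scoring, an agent that gets 3 of 5 GSM-8K answers right would receive a fitness of 0.6, and the Darwinian engine can rank agents by this value.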
As of today, Ebiose uses LangChain/LangGraph to implement agents. Using different LLM providers and ML models has been made as easy as possible.
Since June 2025, Ebiose has integrated with LiteLLM and now offers its own cloud, making model management even easier.
The fastest and easiest way to run your forge in just a few steps, with $10 in free credits.
Sign up at Ebiose Cloud.
Generate your Ebiose API key and add it to your `model_endpoints.yml` file:

```yaml
ebiose:
  api_key: "your-ebiose-api-key" # Replace with your Ebiose API key
  api_base: "https://cloud.ebiose.com/"
```
Specify the model to use by default:

```yaml
default_endpoint_id: "azure/gpt-4o-mini"
```
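Put together, a minimal `model_endpoints.yml` for the cloud setup combines the two snippets above (field names are taken from those snippets; the indentation is an assumption):

```yaml
# Minimal model_endpoints.yml for Ebiose Cloud
default_endpoint_id: "azure/gpt-4o-mini"
ebiose:
  api_key: "your-ebiose-api-key" # Replace with your Ebiose API key
  api_base: "https://cloud.ebiose.com/"
```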
As of June 2025, the Ebiose web app only allows you to create an API key with $10 in free credits to experiment with running your own forges. More features coming soon.
To run a forge cycle with Ebiose cloud, be sure to set it up using the dedicated `CloudForgeCycleConfig` class.
Ebiose Cloud currently supports the following models:
- `azure/gpt-4o-mini`
- `azure/gpt-4.1-mini`
- `azure/gpt-4.1-nano`
- `azure/gpt-4o`
More models to come. Feel free to ask.
Ebiose integrates with LiteLLM, either through the cloud or a self-hosted proxy.
Refer to the LiteLLM documentation to get started and generate your LiteLLM API key.
Once you have your key, update the `model_endpoints.yml` file as follows:

```yaml
lite_llm:
  use: true # Set to true to enable LiteLLM
  use_proxy: false # Set to true if using a self-hosted LiteLLM proxy
  api_key: "your-litellm-api-key" # Replace with your LiteLLM API key
  api_base: "your-litellm-proxy-url" # Optional: your LiteLLM proxy URL
```
Finally, define your LiteLLM endpoints using the appropriate model naming format:

```yaml
endpoints:
  - endpoint_id: "azure/gpt-4o-mini"
    provider: "Azure OpenAI"
```
To run a forge cycle without Ebiose cloud, be sure to set it up using the dedicated `LocalForgeCycleConfig` class.
π¨ The "local" mode for running forge cycles has not been fully tested. Use with caution and report any issues. See Issue #29 for details.
You may also use your own credentials without going through LiteLLM.
To do so, define the model endpoints you want to use in the `model_endpoints.yml` file located at the root of the project. Fill in your secret credentials using the examples below. For providers not listed here, refer to LangChain's documentation and adapt the `LangGraphLLMApi` class as needed.
Issues and pull requests are welcome!
To use OpenAI LLMs, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
default_endpoint_id: "gpt-4o-mini"
endpoints:
  - endpoint_id: "gpt-4o-mini"
    provider: "OpenAI"
    api_key: "YOUR_OPENAI_API_KEY"
```
To use OpenAI LLMs on Azure, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
endpoints:
  - endpoint_id: "azure/gpt-4o-mini"
    provider: "Azure OpenAI"
    api_key: "YOUR_AZURE_OPENAI_API_KEY"
    endpoint_url: "AZURE_OPENAI_ENDPOINT_URL"
    api_version: "API_VERSION"
    deployment_name: "DEPLOYMENT_NAME"
```
To use other LLMs hosted on Azure, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
endpoints:
  - endpoint_id: "llama3-8b"
    provider: "Azure ML"
    api_key: "YOUR_AZURE_ML_API_KEY"
    endpoint_url: "AZURE_ENDPOINT_URL"
```
To use Anthropic LLMs, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
endpoints:
  - endpoint_id: "claude-3-sonnet-20240229"
    provider: "Anthropic"
    api_key: "YOUR_ANTHROPIC_API_KEY"
```
Don't forget to install LangChain's Anthropic library by executing `uv sync --extra anthropic` or `pip install -U langchain-anthropic`.
To use Hugging Face LLMs, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
endpoints:
  - endpoint_id: "microsoft/Phi-3-mini-4k-instruct"
    provider: "Hugging Face"
```
Don't forget to install LangChain's Hugging Face library by executing `uv sync --extra huggingface` or `pip install -U langchain-huggingface`, and log in with the following:

```python
from huggingface_hub import login
login()
```
To use OpenRouter LLMs, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
endpoints:
  - endpoint_id: "openrouter/quasar-alpha"
    provider: "OpenRouter"
    api_key: "YOUR_OPENROUTER_API_KEY" # Fill in your OpenRouter API key
    endpoint_url: "https://openrouter.ai/api/v1" # OpenRouter API endpoint URL
```
OpenRouter relies on the openai library, which is installed by default.
To use Google LLMs, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
endpoints:
  - endpoint_id: "gemini-2.5-pro-exp-03-25"
    provider: "Google"
    api_key: "YOUR_GOOGLE_API_KEY" # Fill in your Google API key
```
Don't forget to install LangChain's Google GenAI library by executing `uv sync --extra google` or `pip install langchain-google-genai`.
To use Ollama LLMs, fill the `model_endpoints.yml` file at the root of the project with, for example:

```yaml
endpoints:
  - endpoint_id: "ModelName" # Replace with the actual model name, e.g., "llama3-8b"
    provider: "Ollama"
    endpoint_url: "http://<Ollama host IP>:11434/v1"
```
Don't forget to install LangChain's Ollama library by executing `uv sync --extra ollama` or `pip install langchain-ollama`.
Again, we aim to be compatible with every provider you are used to, so feel free to open an issue and contribute to expanding our LLM coverage. First check whether LangChain is compatible with your preferred provider here.
Langfuse version warning: Ebiose currently uses Langfuse version 2.x.x. Updating to Langfuse 3.x.x is planned but not yet implemented due to compatibility issues. See Issue #28 for details.
Ebiose uses Langfuse's `@observe` decorator to observe nested agents' traces.
Langfuse can be easily self-hosted; see Langfuse's documentation to do so.
Once the Langfuse server is running, you can set Langfuse credentials in your `.env` file by adding:

```shell
# Langfuse credentials
LANGFUSE_SECRET_KEY="your_langfuse_secret_key"
LANGFUSE_PUBLIC_KEY="your_langfuse_public_key"
LANGFUSE_HOST="your_langfuse_host"
```
Ebiose uses Loguru for logging. No setup is required, but you can easily adapt the logs to your needs.
Here are some common issues users might face and their solutions:
Solution: Ensure `uv` is installed correctly. Follow the official installation guide. Alternatively, use pip:

```shell
pip install -r requirements.txt
```
Solution: Use a virtual environment to isolate dependencies:

```shell
python -m venv ebiose-env
source ebiose-env/bin/activate # On Windows: ebiose-env\Scripts\activate
uv sync # or pip install -r requirements.txt
```
Solution: Ensure your API keys are set in the `model_endpoints.yml` file, for example:

```yaml
endpoints:
  # OpenAI endpoints
  - endpoint_id: "gpt-4o-mini"
    provider: "OpenAI"
    api_key: "YOUR_OPENAI_API_KEY" # fill in your OpenAI API key
```
Solution: Ensure Jupyter is installed and the kernel is set correctly:

```shell
pip install notebook
jupyter notebook
```
Solution: Set the PYTHONPATH variable in your `.env` file as shown in `.env.example`. Alternatively, add the project root to your PYTHONPATH:

```shell
export PYTHONPATH=$PYTHONPATH:$(pwd)
```
We are committed to fostering a welcoming and inclusive community. Please read our Code of Conduct before participating.
We welcome contributions from the community! Here's how you can help:
- Report Bugs: Open an issue on GitHub with detailed steps to reproduce the problem.
- Suggest Features: Share your ideas for new features or improvements.
- Submit Pull Requests: Fork the repository, make your changes, and submit a PR. Please follow our contribution guidelines.
For more details, check out our Contribution Guide.
Ebiose is licensed under the MIT License. This means you're free to use, modify, and distribute the code, as long as you include the original license.
If you have any questions or need help, feel free to:
- Open an issue on GitHub.
- Join our Discord server.
- Reach out to the maintainers directly.
All feedback is highly appreciated. Thanks!