Autonomous AI Agents that Self-Evolve


Copyright © 2025 Inria

Ebiose is a distributed artificial intelligence factory, an open source project from the incubator of Inria, the French national institute for research in digital science and technology. Our vision: enabling humans and agents to collaborate in building tomorrow's AI in an open and democratic way.

"AI can just as easily become the weapon of a surveillance capitalism dystopia as the foundation of a democratic renaissance."

👀 Must read 👀

🧪 Current status: Beta 0.1

This first beta version implements the foundations of our vision.

✅ What's included

  • Architect agents: Specialized AIs for designing and evolving other agents
  • Darwinian engine: Evolutionary system enabling continuous improvement of agents through mutation and selection (see the sketch after this list)
  • Forges: Isolated environments where architect agents create custom agents to solve specific problems
  • LangGraph Compatibility: Integration with the LangGraph ecosystem for agent orchestration
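
To make the Darwinian engine concrete, here is a minimal, self-contained sketch of the mutate-evaluate-select loop such an engine runs; every name below is illustrative, not the actual Ebiose API.

import random

# Illustrative sketch of a Darwinian forge cycle (hypothetical names).
def run_forge_cycle(population, compute_fitness, mutate, generations=10):
    """Evolve a population of agents by repeated mutation and selection."""
    for _ in range(generations):
        # Evaluate every agent on the forge's problem and rank them.
        ranked = sorted(population, key=compute_fitness, reverse=True)
        # Keep the top half as survivors...
        survivors = ranked[: len(ranked) // 2]
        # ...and refill the population with mutated copies of survivors.
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children
    return population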

With the latest release (June 2025):

  • A shared centralized ecosystem: Use Ebiose's cloud to kickstart a forge cycle with curated agents from our shared ecosystem. The top-performing agents are automatically promoted and reintegrated, making the ecosystem stronger with every cycle. 👉 Access the Ebiose cloud now.
  • LiteLLM support: Ebiose now integrates with LiteLLM to simplify the management of your own LLMs.

🚨 Points of caution

  • Proof of concept: Don't expect complex or production-ready agents
  • Initial architect agent: The first implemented architect agent is still simple and will be improved
  • Early stage: Be prepared to work through initial issues and contribute to improvements! 😇

🚀 Quick start

🔧 Installation

First, clone the repository:

git clone git@github.com:ebiose-ai/ebiose.git && cd ebiose

βš™οΈ Initialization

Initialize the project by running the following command:

make init

This command will:

  • Copy the model_endpoints_template.yml file to model_endpoints.yml if it doesn't already exist, and instruct you to fill it in with your API keys.
  • Copy the .env.example file to .env if it doesn't already exist.
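
If make is unavailable, the initialization boils down to copying the two template files by hand:

cp model_endpoints_template.yml model_endpoints.yml  # then fill in your API keys
cp .env.example .env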

🔥 Run your first Ebiose forge cycle

There are two ways to start running Ebiose:

  • the most straightforward way is to use Docker: go to the 🐳 With Docker section. 🚧 Docker support for the new release is currently untested; see Issue #26 for details.

  • if you are not yet comfortable with Ebiose and want to learn the basics step by step, you can instead install the project dependencies and work through the quickstart.ipynb Jupyter notebook; follow the steps in 💻 Locally to install Ebiose.

💻 Locally

📦 Install Project Dependencies

Ebiose uses uv as a packaging and dependency manager. See Astral's uv documentation to install it.

Once uv is installed, use it to install all required project dependencies. In your project directory, run:

uv sync

By default, Ebiose supports OpenAI models, but other major providers can also be used; refer to 🤖 Model APIs support.

For more detailed instructions or troubleshooting tips, refer to the official uv documentation.

💡 If you don't want to use uv, you can still install the dependencies with pip install -r requirements.txt.

💡 Pro Tip: You may need to add the root of the repository to your PYTHONPATH environment variable. Alternatively, use a .env file to do so.

πŸ” Understand forges and forge cycles

The Jupyter notebook quickstart.ipynb is the easiest way to understand the basics and start experimenting with Ebiose. This notebook lets you try out architect agents and forges on your very own challenges. 🤓

πŸ› οΈ Implement your own forge

To go further, the examples/ directory features a complete forge example designed to optimize agents that solve math problems. Check out examples/math_forge/math_forge.py for the implementation of the MathLangGraphForge forge.

For demonstration purposes, the run.py script is configured to run a forge cycle with only two agents per generation, using a tiny budget of $0.02. The cycle should take 1 to 2 minutes to consume the budget with the default model endpoint gpt-4o-mini. Each generated agent is evaluated on 5 math problems from the GSM-8k test dataset.

To run a cycle of the Math forge, execute the following command in your project directory:

uv run ./examples/math_forge/run.py

Kick off your journey by implementing your own forge with the accompanying compute_fitness method! 🎉
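
As a starting point, here is a hypothetical sketch of what a forge's fitness function could look like. The real base class and agent interface are defined in the Ebiose source (see examples/math_forge/math_forge.py), and every name below other than compute_fitness is an assumption:

# Hypothetical sketch; not the actual Ebiose forge interface.
class QuizForge:
    """Toy forge that scores agents on a fixed set of question/answer pairs."""

    def __init__(self, problems):
        self.problems = problems  # list of {"question": ..., "answer": ...} dicts

    async def compute_fitness(self, agent) -> float:
        # Fitness = fraction of problems the agent answers correctly.
        correct = 0
        for problem in self.problems:
            response = await agent.run(problem["question"])  # assumed agent API
            if problem["answer"] in response:
                correct += 1
        return correct / len(self.problems)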

🤖 Model APIs support

As of today, Ebiose uses LangChain/LangGraph to implement agents, and using different LLM and ML model providers has been made as easy as possible.

Since June 2025, Ebiose has been integrated with LiteLLM and now offers its own cloud, making model management even easier.

Ebiose Cloud

The fastest and easiest way to run your forge in just a few steps, with $10 of free credits.

1. Create your account

Sign up at Ebiose Cloud.

2. Add your API key

Generate your Ebiose API key and add it to your model_endpoints.yml file:

ebiose:
  api_key: "your-ebiose-api-key"  # Replace with your Ebiose API key
  api_base: "https://cloud.ebiose.com/"

3. Set your default model

Specify the model to use by default:

default_endpoint_id: "azure/gpt-4o-mini"

🚧 As of June 2025, the Ebiose web app only allows you to create an API key with $10 in free credits to experiment with running your own forges. More features coming soon.

🚨 To run a forge cycle with Ebiose cloud, be sure to set it up using the dedicated CloudForgeCycleConfig class.
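
For illustration only, wiring this up could look roughly like the following; the import path, parameter names, and launch call are assumptions, so check the CloudForgeCycleConfig class in the Ebiose source for the real signature (LocalForgeCycleConfig plays the same role for local runs):

# Hypothetical sketch; import path and parameters are assumptions.
from ebiose.cloud import CloudForgeCycleConfig  # assumed import path

config = CloudForgeCycleConfig(
    budget=0.02,   # assumed: dollar budget per cycle, as in the math forge example
    n_agents=2,    # assumed: agents per generation
)
# forge.run_cycle(config)  # assumed launch call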

✅ Supported models

Ebiose Cloud currently supports the following models:

  • azure/gpt-4o-mini
  • azure/gpt-4.1-mini
  • azure/gpt-4.1-nano
  • azure/gpt-4o

More models to come. Feel free to ask.

Using LiteLLM

Ebiose integrates with LiteLLM, either through the cloud or a self-hosted proxy.
Refer to the LiteLLM documentation to get started and generate your LiteLLM API key.

Once you have your key, update the model_endpoints.yml file as follows:

lite_llm:
  use: true                 # Set to true to enable LiteLLM
  use_proxy: false          # Set to true if using a self-hosted LiteLLM proxy
  api_key: "your-litellm-api-key"         # Replace with your LiteLLM API key
  api_base: "your-litellm-proxy-url"      # Optional: your LiteLLM proxy URL

Finally, define your LiteLLM endpoints using the appropriate model naming format:

endpoints:
  - endpoint_id: "azure/gpt-4o-mini"
    provider: "Azure OpenAI"

🚨 To run a forge cycle without Ebiose cloud, be sure to set it up using the dedicated LocalForgeCycleConfig class.

🚨 The "local" mode for running forge cycles has not been fully tested. Use with caution and report any issues. See Issue #29 for details.

Using Your Own Access to LLM Providers

You may also use your own credentials without going through LiteLLM.
To do so, define the model endpoints you want to use in the model_endpoints.yml file located at the root of the project.

Fill in your secret credentials using the examples below.

For other providers not listed here, refer to LangChain's documentation
and adapt the LangGraphLLMApi class as needed.
Issues and pull requests are welcome!

🚨 To run a forge cycle without Ebiose cloud, be sure to set it up using the dedicated LocalForgeCycleConfig class.

🚨 The "local" mode for running forge cycles has not been fully tested. Use with caution and report any issues. See Issue #29 for details.

OpenAI

To use OpenAI LLMs, fill in the model_endpoints.yml file at the root of the project with, for example:

default_endpoint_id: "gpt-4o-mini"
endpoints:
  - endpoint_id: "gpt-4o-mini"
    provider: "OpenAI"
    api_key: "YOUR_OPENAI_API_KEY"

Azure OpenAI

To use OpenAI LLMs on Azure, fill in the model_endpoints.yml file at the root of the project with, for example:

endpoints:
  - endpoint_id: "azure/gpt-4o-mini"
    provider: "Azure OpenAI"
    api_key: "YOUR_AZURE_OPENAI_API_KEY"
    endpoint_url: "AZURE_OPENAI_ENDPOINT_URL"
    api_version: "API_VERSION"
    deployment_name: "DEPLOYMENT_NAME"

Azure ML LLMs

To use other LLMs hosted on Azure, fill in the model_endpoints.yml file at the root of the project with, for example:

endpoints:
  - endpoint_id: "llama3-8b"
    provider: "Azure ML"
    api_key: "YOUR_AZURE_ML_API_KEY"
    endpoint_url: "AZURE_ENDPOINT_URL"

Anthropic (not tested yet)

To use Anthropic LLMs, fill in the model_endpoints.yml file at the root of the project with, for example:

endpoints:
  - endpoint_id: "claude-3-sonnet-20240229"
    provider: "Anthropic"
    api_key: "YOUR_OPENAI_API_KEY"

🚨 Don't forget to install LangChain's Anthropic library by executing uv sync --extra anthropic or pip install -U langchain-anthropic.

HuggingFace (not tested yet)

To use Hugging Face LLMs, fill in the model_endpoints.yml file at the root of the project with, for example:

endpoints:
  - endpoint_id: "microsoft/Phi-3-mini-4k-instruct"
    provider: "Hugging Face"

🚨 Don't forget to install LangChain's Hugging Face library by executing uv sync --extra huggingface or pip install -U langchain-huggingface, and log in with the following:

from huggingface_hub import login
login()

OpenRouter

To use OpenRouter LLMs, fill in the model_endpoints.yml file at the root of the project with, for example:

endpoints:
  - endpoint_id: "openrouter/quasar-alpha"
    provider: "OpenRouter"
    api_key: "YOUR_OPENROUTER_API_KEY"  # Fill in your OpenRouter API key
    endpoint_url: "https://openrouter.ai/api/v1"  # OpenRouter API endpoint URL

OpenRouter relies on the openai library, which is installed by default.

Google (not tested yet)

To use Google LLMs, fill in the model_endpoints.yml file at the root of the project with, for example:

endpoints:
  - endpoint_id: "gemini-2.5-pro-exp-03-25"
    provider: "Google"
    api_key: "YOUR_GOOGLE_API_KEY"  # Fill in your Google API key

🚨 Don't forget to install LangChain's Google GenAI library by executing uv sync --extra google or pip install langchain-google-genai.

Ollama (not tested yet)

To use Ollama LLMs, fill in the model_endpoints.yml file at the root of the project with, for example:

endpoints:
  - endpoint_id: "ModelName"  # Replace with the actual model name, e.g., "llama3-8b"
    provider: "Ollama"
    endpoint_url: "http://<Ollama host IP>:11434/v1"

🚨 Don't forget to install LangChain's Ollama library by executing uv sync --extra ollama or pip install langchain-ollama.

Others

Again, we want to be compatible with every provider you are used to, so feel free to open an issue or contribute to expanding our LLM coverage. First, check here whether LangChain supports your preferred provider.

πŸ” Observability

🚨 Langfuse Version Warning: Ebiose currently uses Langfuse version 2.x.x. Updating to Langfuse 3.x.x is planned but not yet implemented due to compatibility issues. See Issue #28 for details.

Ebiose uses Langfuse's @observe decorator to capture nested agent traces. Langfuse can easily be self-hosted; see Langfuse's documentation to do so. Once the Langfuse server is running, set your Langfuse credentials in your .env file by adding:

# Langfuse credentials
LANGFUSE_SECRET_KEY="your_langfuse_secret_key"
LANGFUSE_PUBLIC_KEY="your_langfuse_public_key"
LANGFUSE_HOST="your_langfuse_host"
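
With the credentials in place, nested tracing comes from decorating your functions; a minimal example with the Langfuse 2.x decorator API:

from langfuse.decorators import observe  # Langfuse 2.x

@observe()
def solve(question: str) -> str:
    # Calls to other @observe()-decorated functions made from here
    # are recorded as nested spans of the same trace.
    return rewrite(question)

@observe()
def rewrite(question: str) -> str:
    return question.strip()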

πŸ“ Logging

Ebiose uses Loguru for logging. It works out of the box, and you can easily adapt the logs to your needs.
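
For instance, to change the log level or add a rotating log file, reconfigure Loguru's default handler:

import sys
from loguru import logger

logger.remove()                             # drop Loguru's default handler
logger.add(sys.stderr, level="INFO")        # console: INFO and above only
logger.add("ebiose.log", rotation="10 MB")  # also write rotating file logs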

🆘 Troubleshooting

Here are some common issues users might face and their solutions:

Issue 1: uv Command Not Found

Solution: Ensure uv is installed correctly. Follow the official installation guide. Alternatively, use pip:

pip install -r requirements.txt

Issue 2: Python Environment Conflicts

Solution: Use a virtual environment to isolate dependencies:

python -m venv ebiose-env
source ebiose-env/bin/activate  # On Windows: ebiose-env\Scripts\activate
uv sync  # or pip install -r requirements.txt

Issue 3: Missing API Keys

Solution: Ensure your API keys are set in the model_endpoints.yml file, for example:

endpoints:
  # OpenAI endpoints
  - endpoint_id: "gpt-4o-mini"
    provider: "OpenAI"
    api_key: "YOUR_OPENAI_API_KEY"  # fill in your OpenAI API key

Issue 4: Jupyter Notebook Not Running

Solution: Ensure Jupyter is installed and the kernel is set correctly:

pip install notebook
jupyter notebook

Issue 5: ModuleNotFoundError

Solution: Set the PYTHONPATH variable in your .env file, as shown in the .env.example file. Alternatively, add the project root to your PYTHONPATH:

export PYTHONPATH=$PYTHONPATH:$(pwd)

📜 Code of Conduct

We are committed to fostering a welcoming and inclusive community. Please read our Code of Conduct before participating.

🤝 Contributing

We welcome contributions from the community! Here's how you can help:

  • Report Bugs: Open an issue on GitHub with detailed steps to reproduce the problem.
  • Suggest Features: Share your ideas for new features or improvements.
  • Submit Pull Requests: Fork the repository, make your changes, and submit a PR. Please follow our contribution guidelines.

For more details, check out our Contribution Guide.

📜 License

Ebiose is licensed under the MIT License. This means you're free to use, modify, and distribute the code, as long as you include the original license.

❓ Questions?

If you have any questions or need help, feel free to:

  • Open an issue on GitHub.
  • Join our Discord server.
  • Reach out to the maintainers directly.

All feedback is highly appreciated. Thanks! 🎊
