Ready-to-use customizable multi-agent AI system that combines plug-and-play simplicity with framework-level flexibility
🚀 Quick Start • 🤖 Try Demo • 🔧 Configuration • 🎯 Features • 💡 Use Cases
evi-run is a powerful, production-ready multi-agent AI system that bridges the gap between out-of-the-box solutions and custom AI frameworks. Built on Python with OpenAI Agents SDK integration, it delivers enterprise-grade AI capabilities through an intuitive Telegram bot interface.
- 🚀 Instant Deployment - Get your AI system running in minutes, not hours
- 🔧 Ultimate Flexibility - Framework-level customization capabilities
- 📊 Built-in Analytics - Comprehensive usage tracking and insights
- 💬 Telegram Integration - Seamless user experience through familiar messaging interface
- 🏗️ Scalable Architecture - Grows with your needs from prototype to production
- Memory Management - Context control and long-term memory
- Knowledge Integration - Dynamic knowledge base expansion
- Document Processing - Handle PDFs, images, and various file formats
- Deep Research - Multi-step investigation and analysis
- Web Intelligence - Smart internet search and data extraction
- Image Generation - AI-powered visual content creation
- DEX Analytics - Real-time decentralized exchange monitoring
- Token Swap - Easy, fast, and secure token swaps
- Multi-Agent Orchestration - Complex task decomposition and execution
- Custom Agent Creation - Build specialized AI agents for specific tasks
- Private Mode - Personal use for bot owner only
- Free Mode - Public access with configurable usage limits
- Pay Mode - Monetized system with balance management and payments
- NSFW Mode - Unrestricted topic exploration and content generation
- Task Scheduler - Automated agent task planning and execution
- Automatic Limit Orders - Smart trading with automated take-profit and stop-loss functionality
| Component | Technology |
|---|---|
| Core Language | Python 3.9+ |
| AI Framework | OpenAI Agents SDK |
| Communication | MCP (Model Context Protocol) |
| Blockchain | Solana RPC |
| Interface | Telegram Bot API |
| Database | PostgreSQL |
| Cache | Redis |
| Deployment | Docker & Docker Compose |
Get evi-run running in under 5 minutes with our streamlined Docker setup:
System Requirements:
- Ubuntu 22.04 server (hosted in a region not blocked by OpenAI)
- Root or sudo access
- Internet connection
Required API Keys & Tokens:
- Telegram Bot Token - Create bot via @BotFather
- OpenAI API Key - Get from OpenAI Platform
- Your Telegram ID - Get from @userinfobot
1. Download and prepare the project:

   # Navigate to installation directory
   cd /opt
   # Clone the project from GitHub
   git clone https://github.com/pipedude/evi-run.git
   # Set proper permissions
   sudo chown -R $USER:$USER evi-run
   cd evi-run

2. Configure environment variables:

   # Copy example configuration
   cp .env.example .env
   # Edit configuration files
   nano .env        # Add your API keys and tokens
   nano config.py   # Set your Telegram ID and preferences

3. Run automated Docker setup:

   # Make setup script executable
   chmod +x docker_setup_en.sh
   # Run Docker installation
   ./docker_setup_en.sh

4. Launch the system:

   # Build and start containers
   docker compose up --build -d

5. Verify installation:

   # Check running containers
   docker compose ps
   # View logs
   docker compose logs -f
🎉 That's it! Your evi-run system is now live. Open your Telegram bot and start chatting!
# REQUIRED: Telegram Bot Token from @BotFather
TELEGRAM_BOT_TOKEN=your_bot_token_here
# REQUIRED: OpenAI API Key
API_KEY_OPENAI=your_openai_api_key
# REQUIRED: Your Telegram User ID
ADMIN_ID = 123456789
# Usage Mode: 'private', 'free', or 'pay'
TYPE_USAGE = 'private'
| Mode | Description | Best For |
|---|---|---|
| Private | Bot owner only | Personal use, development, testing |
| Free | Public access with limits | Community projects, demos |
| Pay | Monetized with balance system | Commercial applications, SaaS |
To activate Pay mode, please contact the project developer, who will guide you through the process.
Note: In future releases, project tokens will be publicly available for purchase, and the activation process will be fully automated through the bot interface.
Create engaging AI personalities for entertainment, education, or brand representation.
Deploy intelligent support bots that understand context and provide helpful solutions.
Build your own AI companion for productivity, research, and daily tasks.
Automate data processing, generate insights, and create reports from complex datasets.
Launch trading agents for DEX with real-time analytics.
Leverage the framework to build specialized AI agents for any domain or industry.
By default, the system is configured for a good balance of performance and low running cost. For professional and specialized use cases, proper model selection is crucial for both quality and cost efficiency.
For Deep Research and Complex Analysis:
- `o3-deep-research` - Most powerful deep research model for complex multi-step research tasks
- `o4-mini-deep-research` - Faster, more affordable deep research model
For maximum research capabilities using specialized deep research models:
1. Use `o3-deep-research` for the most powerful analysis in `bot/agents_tools/agents_.py`:

   deep_agent = Agent(
       name="Deep Agent",
       model="o3-deep-research",  # Most powerful deep research model
       # ... instructions
   )

2. Alternative: Use `o4-mini-deep-research` for cost-effective deep research:

   deep_agent = Agent(
       name="Deep Agent",
       model="o4-mini-deep-research",  # Faster, more affordable deep research
       # ... instructions
   )

3. Update Main Agent instructions to prevent summarization (see the sketch after this list):
   - Locate the main agent instructions in the same file
   - Ensure the instruction includes: "VERY IMPORTANT! Do not generalize the answers received from the deep_knowledge tool, especially for deep research, provide them to the user in full, in the user's language."
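As a rough sketch of where that rule lives (the surrounding content is illustrative; the real instructions string already exists in `create_main_agent`):

# Sketch only: keep the anti-summarization rule inside the main agent's instructions
main_agent = Agent(
    name="Main Agent",
    instructions=(
        "...existing character profile and behavior rules..."
        "\nVERY IMPORTANT! Do not generalize the answers received from the "
        "deep_knowledge tool, especially for deep research, provide them to the "
        "user in full, in the user's language."
    ),
    # ... existing model, tools, and settings
)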
For the complete list of available models, capabilities, and pricing, see the OpenAI Models Documentation.
evi-run uses the Agents library with a multi-agent architecture where specialized agents are integrated as tools into the main agent. All agent configuration is centralized in `bot/agents_tools/agents_.py`.
1. Create the Agent
# Add after existing agents
custom_agent = Agent(
name="Custom Agent",
instructions="Your specialized agent instructions here...",
model="gpt-5-mini",
model_settings=ModelSettings(
reasoning=Reasoning(effort="low"),
extra_body={"text": {"verbosity": "medium"}}
),
tools=[WebSearchTool(search_context_size="medium")] # Optional tools
)
2. Register as Tool in Main Agent
# In create_main_agent function, add to main_agent.tools list:
main_agent = Agent(
# ... existing configuration
tools=[
# ... existing tools
custom_agent.as_tool(
tool_name="custom_function",
tool_description="Description of what this agent does"
),
]
)
Main Agent (Evi) Personality:
Edit the detailed instructions in the `main_agent` instructions block:
- Character profile and personality
- Expertise areas
- Communication style
- Behavioral patterns
Agent Parameters:
- `name`: Agent identifier
- `instructions`: System prompt and behavior
- `model`: OpenAI model (`gpt-5`, `gpt-5-mini`, etc.)
- `model_settings`: Model settings (Reasoning, extra_body, etc.)
- `tools`: Available tools (WebSearchTool, FileSearchTool, etc.)
- `mcp_servers`: MCP server connections (sketched below)
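The `mcp_servers` parameter is not shown in the example above. A minimal, hypothetical sketch of attaching an MCP server to an agent (the server command and path are placeholders; the MCP setup actually used by evi-run lives in `bot/agents_tools/agents_.py`):

# Hypothetical sketch: wiring an MCP server into an agent (placeholders only)
from agents import Agent
from agents.mcp import MCPServerStdio

# The server must be connected (e.g. via `async with docs_server:`) before the agent runs
docs_server = MCPServerStdio(
    params={
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "./docs"],
    }
)

docs_agent = Agent(
    name="Docs Agent",
    instructions="Answer questions using the files exposed by the MCP server.",
    model="gpt-5-mini",
    mcp_servers=[docs_server],
)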
evi-run supports non-OpenAI models through the Agents library. There are several ways to integrate other LLM providers:
Method 1: LiteLLM Integration (Recommended)
Install the LiteLLM dependency:
pip install "openai-agents[litellm]"
Use models with the `litellm/` prefix:
# Claude via LiteLLM
claude_agent = Agent(
name="Claude Agent",
instructions="Your instructions here...",
model="litellm/anthropic/claude-3-5-sonnet-20240620",
# ... other parameters
)
# Gemini via LiteLLM
gemini_agent = Agent(
name="Gemini Agent",
instructions="Your instructions here...",
model="litellm/gemini/gemini-2.5-flash-preview-04-17",
# ... other parameters
)
Method 2: LitellmModel Class
from agents.extensions.models.litellm_model import LitellmModel
custom_agent = Agent(
name="Custom Agent",
instructions="Your instructions here...",
model=LitellmModel(model="anthropic/claude-3-5-sonnet-20240620", api_key="your-api-key"),
# ... other parameters
)
Method 3: Global OpenAI Client
from agents.models._openai_shared import set_default_openai_client
from openai import AsyncOpenAI
# For providers with OpenAI-compatible API
set_default_openai_client(AsyncOpenAI(
base_url="https://api.provider.com/v1",
api_key="your-api-key"
))
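Most OpenAI-compatible providers only expose a Chat Completions endpoint, so with Method 3 you will usually also want to switch the default API; a minimal sketch using the Agents SDK helpers:

from agents import set_default_openai_api, set_tracing_disabled

# Use the Chat Completions API instead of the Responses API for the custom provider
set_default_openai_api("chat_completions")

# Optional: skip exporting traces to OpenAI when you are not using an OpenAI key
set_tracing_disabled(True)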
Documentation & Resources:
- Model Configuration Guide - Complete setup documentation
- LiteLLM Integration - Detailed LiteLLM usage
- Supported Models - Full list of LiteLLM providers
Important Notes:
- Most LLM providers don't support the Responses API yet
- If not using OpenAI, consider disabling tracing with `set_tracing_disabled(True)`
- You can mix different providers for different agents (see the sketch below)
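As a sketch of mixing providers, a LiteLLM-backed sub-agent can be registered as a tool of the OpenAI-backed main agent (agent names, tool names, and instructions below are illustrative):

# Illustrative only: Anthropic-backed sub-agent used as a tool by an OpenAI-backed agent
from agents import Agent

writer_agent = Agent(
    name="Writer Agent",
    instructions="Draft long-form text on request.",
    model="litellm/anthropic/claude-3-5-sonnet-20240620",
)

main_agent = Agent(
    name="Main Agent",
    instructions="Delegate long-form writing to the writer tool when appropriate.",
    model="gpt-5-mini",
    tools=[
        writer_agent.as_tool(
            tool_name="long_form_writer",
            tool_description="Drafts long-form text using Claude via LiteLLM",
        ),
    ],
)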
- Focused Instructions: Each agent should have a clear, specific purpose
- Model Selection: Use appropriate models for complexity (gpt-5 vs gpt-5-mini)
- Tool Integration: Leverage WebSearchTool, FileSearchTool, and MCP servers
- Naming Convention: Use descriptive tool names for main agent clarity
- Testing: Test agent responses in isolation before integration
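For the testing step above, an agent can be exercised on its own before being registered as a tool. A minimal sketch using the SDK's Runner (the prompt is just an example):

# Quick isolation test for a single agent, run outside the bot
from agents import Runner

result = Runner.run_sync(custom_agent, "Give a one-paragraph overview of Solana DEX volume trends.")
print(result.final_output)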
Customizing Bot Interface Messages:
All bot messages and interface text are stored in the `I18N` directory and can be fully customized to match your needs:
I18N/
├── factory.py # Translation loader
├── en/
│ └── txt.ftl # English messages
└── ru/
└── txt.ftl # Russian messages
Message Files Format:
The bot uses the Fluent localization format (`.ftl` files) for multi-language support.
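As an illustration of the syntax only (the message IDs below are hypothetical; the real ones are defined in the existing `txt.ftl` files):

# Hypothetical keys shown only to illustrate Fluent syntax
welcome-message = Hi, { $name }! I'm Evi, your AI assistant.
balance-info = Your current balance is { $amount } credits.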
To customize messages:
- Edit the appropriate `.ftl` file in `I18N/en/` or `I18N/ru/`
- Restart the bot container for changes to take effect
- Add new languages by creating new subdirectories with `txt.ftl` files
evi-run includes comprehensive tracing and analytics capabilities through the OpenAI Agents SDK. The system automatically tracks all agent operations and provides detailed insights into performance and usage.
Automatic Tracking:
- Agent Runs - Each agent execution with timing and results
- LLM Generations - Model calls with inputs/outputs and token usage
- Function Calls - Tool usage and execution details
- Handoffs - Agent-to-agent interactions
- Audio Processing - Speech-to-text and text-to-speech operations
- Guardrails - Safety checks and validations
For ethical reasons, owners of public bots should either explicitly inform users about this or disable tracing.
# Disable tracing in `bot/agents_tools/agents_.py`
set_tracing_disabled(True)
evi-run supports integration with 20+ monitoring and analytics platforms:
Popular Integrations:
- Weights & Biases - ML experiment tracking
- LangSmith - LLM application monitoring
- Arize Phoenix - AI observability
- Langfuse - LLM analytics
- AgentOps - Agent performance tracking
- Pydantic Logfire - Structured logging
Enterprise Solutions:
- Braintrust - AI evaluation platform
- MLflow - ML lifecycle management
- Portkey AI - AI gateway and monitoring
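Beyond these hosted platforms, you can attach your own processor through the SDK's tracing hooks. A minimal sketch (the processor below just prints trace names and is purely illustrative):

# Illustrative custom trace processor registered alongside the default exporter
from agents import add_trace_processor
from agents.tracing.processor_interface import TracingProcessor

class PrintProcessor(TracingProcessor):
    def on_trace_start(self, trace):
        print(f"trace started: {trace.name}")

    def on_trace_end(self, trace):
        print(f"trace finished: {trace.name}")

    def on_span_start(self, span):
        pass

    def on_span_end(self, span):
        pass

    def shutdown(self):
        pass

    def force_flush(self):
        pass

add_trace_processor(PrintProcessor())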
Docker Container Logs:
# View all logs
docker compose logs
# Follow specific service
docker compose logs -f bot
# Database logs
docker compose logs postgres_agent_db
# Filter by time
docker compose logs --since 1h bot
- Complete Tracing Guide - Full tracing documentation
- Analytics Integration List - All supported platforms
Bot not responding:
# Check bot container status
docker compose ps
docker compose logs bot
Database connection errors:
# Restart database
docker compose restart postgres_agent_db
docker compose logs postgres_agent_db
Memory issues:
# Check system resources
docker stats
- Community: Telegram Support Group
- Issues: GitHub Issues
- Telegram: @playa3000
Minimum:
- CPU: 2 cores
- RAM: 2GB
- Storage: 10GB
- Network: Stable internet connection

Recommended:
- CPU: 2+ cores
- RAM: 4GB+
- Storage: 20GB+ SSD
- Network: High-speed connection
- API Keys: Store securely in environment variables
- Database: Use strong passwords and restrict access
- Network: Configure firewalls and use HTTPS
- Updates: Keep dependencies and Docker images updated
This project is licensed under the MIT License - see the LICENSE file for details.
We welcome contributions! Please see our Contributing Guidelines for details.
- Telegram: @playa3000
- Community: Telegram Support Group
Made with ❤️ by the evi-run team
⭐ Star this repository if evi-run helped you build amazing AI experiences! ⭐