This project sets up an Open WebUI interface with LiteLLM as a backend proxy for various AI models. It uses Docker Compose to orchestrate the services.
- Docker and Docker Compose installed on your system
- API keys for the AI models you want to use
- Create a `.env` file in the project root with the following content:

  ```
  MASTER_KEY=your_master_key # required
  ANTHROPIC_API_KEY=your_anthropic_api_key
  OPENAI_API_KEY=your_openai_api_key
  DEEPSEEK_API_KEY=your_deepseek_api_key
  CODESTRAL_API_KEY=your_codestral_api_key
  GROQ_API_KEY=your_groq_api_key
  ```

  Replace `your_*_api_key` with your actual API keys.
- Ensure the `config.yml` file is present in the project root. This file configures the available models for LiteLLM. Feel free to add more models here; a sketch of the entry format follows this list.
- Start the services:

  ```
  docker-compose up -d
  ```
- Access the Open WebUI interface at http://localhost:3000
- Access the LiteLLM OpenAPI page at http://localhost:4000
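As a reference for adding models, here is a sketch of the entry format LiteLLM expects in `config.yml`. The upstream model identifiers shown are illustrative; check the file shipped with this repo for the authoritative list:

```yaml
model_list:
  - model_name: claude-3-5-sonnet                  # name exposed to Open WebUI
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620  # illustrative upstream model ID
      api_key: os.environ/ANTHROPIC_API_KEY        # read from the environment
  - model_name: groq-llama-3.1-8b
    litellm_params:
      model: groq/llama-3.1-8b-instant
      api_key: os.environ/GROQ_API_KEY
```

The `os.environ/...` syntax tells LiteLLM to read the key from the environment, which is how the values in `.env` reach the proxy.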
The stack consists of two services:
- Open WebUI: runs on port 3000 and provides the user interface.
- LiteLLM: runs on port 4000 and acts as a proxy for the configured AI models.
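For orientation, a minimal sketch of how the two services might be wired in `docker-compose.yml`; the service names, image tags, and exact settings here are assumptions based on each project's defaults, so treat the repo's own file as authoritative:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"   # Open WebUI listens on 8080 inside the container
    environment:
      - OPENAI_API_BASE_URL=http://litellm:4000/v1  # route all model calls through LiteLLM
      - OPENAI_API_KEY=${MASTER_KEY}
    depends_on:
      - litellm

  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    ports:
      - "4000:4000"
    env_file:
      - .env          # provides MASTER_KEY and the provider API keys
    volumes:
      - ./config.yml:/app/config.yaml
    command: ["--config", "/app/config.yaml"]
```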
The following models are configured in `config.yml`:
- claude-3-5-sonnet (Anthropic)
- gpt-4o (OpenAI)
- gpt-4o-mini (OpenAI)
- deepseek-coder (DeepSeek)
- deepseek-chat (DeepSeek)
- codestral (Codestral)
- groq-llama-3.1-405b (Groq)
- groq-llama-3.1-70b (Groq)
- groq-llama-3.1-8b (Groq)
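Once the stack is up, any of these models can be exercised directly against LiteLLM's OpenAI-compatible endpoint. A minimal smoke test, assuming the `MASTER_KEY` value from your `.env` and the model names listed above:

```bash
curl http://localhost:4000/v1/chat/completions \
  -H "Authorization: Bearer your_master_key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-3-5-sonnet",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```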
This setup uses environment variables to manage API keys. Ensure that your `.env` file is not committed to version control and is properly secured.
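A minimal safeguard, assuming the repo does not already ship one, is a `.gitignore` entry:

```
# .gitignore
.env
```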
If you encounter any issues, check the Docker logs for each service:
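For example, assuming the service names from the compose sketch above:

```bash
docker-compose logs -f open-webui
docker-compose logs -f litellm
```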