Welcome to the Ollaix API! This is the backend component of the Ollaix project, designed to serve as a versatile bridge between your applications and various Large Language Models (LLMs). 🤖
This API provides a unified interface to interact with different AI providers, including local models via Ollama and powerful cloud models like Google's Gemini. It is built with performance and ease of use in mind, using the modern Litestar Python framework.
The full Ollaix stack includes this backend, the Ollaix UI frontend, and a service for running Ollama models, all containerized for simple deployment.
Experience Ollaix live here: https://ollaix.macktireh.dev
Demo video: demo.mp4

- 🚀 Unified API Gateway: Single endpoint for multiple LLM providers.
- 🤖 Multi-Provider Support: Out-of-the-box integration for Ollama and Google Gemini.
- 📡 Streaming Support: Real-time, non-blocking streaming for chat completions.
- 🔍 Model Discovery: An endpoint to dynamically list all available models from the configured providers.
- 🐳 Containerized: Fully containerized with Docker and Docker Compose for easy setup and deployment.
- ⚡ Modern & Fast: Built with Python and the high-performance Litestar web framework.
- 📈 Scalable Design: A clean, service-oriented architecture that is easy to extend with new AI providers.
The complete Ollaix project stack includes:
- 🐍 Backend: Python, Litestar
- 🧠 LLM Integrations: Ollama, Google Gen AI SDK
- 🐳 Containerization: Docker
- 📦 Package Management: PDM
- ⚛️ Frontend: React, TypeScript, Vite, Tailwind CSS, DaisyUI
To get the Ollaix API up and running on your local machine, follow these steps.
- Docker and Docker Compose
- A Google AI Studio API key (optional; required only if you want to use Gemini models)
- Clone the repository:

  ```bash
  git clone https://github.com/Macktireh/ollaix.git
  cd ollaix
  ```

- Create an environment file: create a `.env` file in the root of the project by copying the example file:

  ```bash
  cp .env.example .env
  ```

- Configure your environment: open the `.env` file and add your Google Gemini API key if you want to use Gemini models. If you don't have a key, you can skip this step; Ollama models will still be available.

  ```env
  # .env
  GEMINI_API_KEY="YOUR_GEMINI_API_KEY"
  ```

- Launch the application: use Docker Compose to build and start all the services (API, Ollama):

  ```bash
  docker compose up --build
  ```
The services will be available at the following URLs:

- Ollaix API: `http://localhost:8000`
- API Documentation (Scalar): `http://localhost:8000/`
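To confirm everything is up, you can query the model-discovery endpoint. Below is a minimal sketch that assumes the default local address and uses the `httpx` client (any HTTP client works); the exact response schema is whatever Ollaix returns, so the payload is printed as-is:

```python
import httpx

# Query the model-discovery endpoint to verify the API is reachable.
response = httpx.get("http://localhost:8000/v1/models")
response.raise_for_status()

# Print the raw payload; the exact schema depends on the Ollaix response format.
print(response.json())
```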
Once the service is running, you can interact with the following main endpoints:
| Method | Path | Description |
|---|---|---|
| GET | `/` | Displays the API documentation (Scalar). |
| GET | `/v1/models` | Lists all available models from all providers. |
| POST | `/v1/chat/completions` | Main endpoint to stream chat responses from an LLM. |
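As an illustration, here is a hedged Python sketch that streams a chat completion. The payload fields (`model`, `messages`, `stream`) are assumptions based on the OpenAI-style `/v1/chat/completions` path, and `"your-model-id"` is a placeholder; check the Scalar documentation at `http://localhost:8000/` for the exact request schema.

```python
import httpx

# Hypothetical request body: field names follow the common OpenAI-style
# convention implied by the /v1/chat/completions path, not a confirmed schema.
payload = {
    "model": "your-model-id",  # replace with an id returned by GET /v1/models
    "messages": [{"role": "user", "content": "Hello, Ollaix!"}],
    "stream": True,
}

# Stream the response line by line as it is produced by the provider.
with httpx.stream(
    "POST",
    "http://localhost:8000/v1/chat/completions",
    json=payload,
    timeout=None,
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if line:
            print(line)
```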
The Ollaix API is designed to be easily integrated with the Ollaix UI, which provides a user-friendly interface for interacting with the API. The UI is built with React and TypeScript, leveraging the API's capabilities to create a seamless chat experience. For more details on how to set up the frontend, please refer to the Ollaix UI repository.
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
Please ensure your pull request provides a clear description of the problem and solution. Include the relevant issue number if applicable.
This project is distributed under the MIT License. See the `LICENSE` file for more information.
Made with ❤️ by Macktireh