A lightweight bridge to connect OpenAI-compatible and KoboldAI endpoints to the AI Power Grid distributed inference network.
This bridge enables you to contribute your local or remote LLM endpoints to the AI Power Grid network. It supports:
- OpenAI API-compatible endpoints
- KoboldAI-compatible endpoints
- Multiple workers with different configurations
- Automatic model name prefixing for endpoint identification
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Copy `bridgeData_template.yaml` to `bridgeData.yaml` and configure your endpoints:
  ```yaml
  # Global settings
  horde_url: "https://api.aipowergrid.io/"
  api_key: "your-api-key-here"
  queue_size: 0

  # Example configurations
  endpoints:
    # OpenAI-compatible endpoint
    - type: "openai"
      name: "openai-endpoint"
      api_key: "your-api-key"
      url: "https://api.openai.com/v1"
      models:
        - name: "gpt35-worker"
          model: "gpt-3.5-turbo"
          max_threads: 1
          max_length: 512
          max_context_length: 4096

    # Local KoboldAI endpoint
    - type: "koboldai"
      name: "local-kobold"
      url: "http://localhost:5000"
      models:
        - name: "local-model"
          max_threads: 1
          max_length: 512
          max_context_length: 4096
  ```
- Start the bridge:

  ```bash
  python start_worker.py
  ```
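
Under the hood, each model entry in the configuration becomes its own worker (see the notes at the end of this section: one thread per model configuration). Below is a minimal sketch of that structure, assuming PyYAML is available for parsing the config; `run_worker` is a hypothetical stand-in for the bridge's actual job-polling loop, not its real code:

```python
import threading

import yaml  # PyYAML, assumed available for parsing bridgeData.yaml


def run_worker(horde_url: str, endpoint: dict, model_cfg: dict) -> None:
    """Hypothetical worker loop: poll the Grid for jobs, forward them to
    the configured endpoint, and submit the results back."""
    ...


with open("bridgeData.yaml") as f:
    config = yaml.safe_load(f)

# One worker thread per model entry under each endpoint.
threads = []
for endpoint in config["endpoints"]:
    for model_cfg in endpoint["models"]:
        thread = threading.Thread(
            target=run_worker,
            args=(config["horde_url"], endpoint, model_cfg),
            name=model_cfg["name"],
            daemon=True,
        )
        thread.start()
        threads.append(thread)

for thread in threads:
    thread.join()
```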
Global settings:

- `horde_url`: AI Power Grid API endpoint
- `api_key`: Your Grid API key
- `queue_size`: Request queue size (`0` for unlimited)

Endpoint settings:

- `type`: API type (`"openai"` or `"koboldai"`)
- `name`: Endpoint identifier
- `url`: Base API URL
- `api_key`: API key for OpenAI-compatible endpoints

Model settings:

- `name`: Worker instance name
- `model`: Model identifier (OpenAI-compatible endpoints only)
- `max_threads`: Concurrent request limit
- `max_length`: Maximum generation length, in tokens
- `max_context_length`: Maximum input context length, in tokens
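
To make the model settings concrete, here is an illustrative sketch of how `max_length` maps onto a request against an OpenAI-compatible endpoint. The request shape follows the standard `/chat/completions` API; the `generate` function itself is a hypothetical example, not the bridge's internal code:

```python
import requests


def generate(url: str, api_key: str, model: str, prompt: str,
             max_length: int = 512) -> str:
    """Send one generation request to an OpenAI-compatible endpoint.

    `max_length` caps the number of generated tokens (sent as
    `max_tokens`); `max_context_length` should be sized to the context
    window the endpoint actually supports.
    """
    resp = requests.post(
        f"{url}/chat/completions",
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_length,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```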
- Model names are automatically prefixed with the endpoint's domain
- Local and IP-address endpoints use the "gridbridge" prefix instead (see the sketch below)
- Each model configuration runs in its own worker thread
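
The exact prefixing rule lives in the bridge source, but a plausible sketch of the behavior described above, assuming the prefix is derived from the hostname of the configured endpoint URL:

```python
import ipaddress
from urllib.parse import urlparse


def prefixed_model_name(endpoint_url: str, worker_name: str) -> str:
    """Illustrative only: derive the advertised model name from the
    endpoint URL, falling back to "gridbridge" for local/IP hosts."""
    host = urlparse(endpoint_url).hostname or ""
    try:
        ipaddress.ip_address(host)  # raises ValueError for domain names
        is_ip = True
    except ValueError:
        is_ip = False
    prefix = "gridbridge" if is_ip or host == "localhost" else host
    return f"{prefix}/{worker_name}"


# prefixed_model_name("https://api.openai.com/v1", "gpt35-worker")
#   -> "api.openai.com/gpt35-worker"
# prefixed_model_name("http://localhost:5000", "local-model")
#   -> "gridbridge/local-model"
```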
Contributions welcome! Please submit issues and pull requests on GitHub.