---
title: OpenAI to Flowith Converter
emoji: 🔄
colorFrom: blue
colorTo: green
sdk: docker
app_port: 7860
---
This project provides a simple API endpoint that accepts OpenAI-compatible chat completion requests and forwards them to the Flowith API, translating the request and response formats as needed. It's designed to be run easily using Docker, both locally and on Hugging Face Spaces.
- **Environment Variables:** Create a `.env` file in the project root by copying the example (`cp .env.example .env`) or create it manually.
- **Flowith Token:** Open the `.env` file and replace the placeholder with your actual Flowith authorization token: `FLOWITH_AUTH_TOKEN=your_actual_token_here`
- **Model Mappings (Optional):** If you need to use different Flowith models or map OpenAI model names differently, update the `models.json` file.
- **API Key (Optional):** By default, the API uses the key `123456`. To use a different key, set the `API_KEY` environment variable in your `.env` file (for local runs) or as a secret named `API_KEY` in Hugging Face Spaces.
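
A minimal `.env` following the steps above might look like this (both values are placeholders — substitute your real token):

```
FLOWITH_AUTH_TOKEN=your_actual_token_here
# Optional: override the default API key (123456)
API_KEY=123456
```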
To build and run the service locally using Docker Compose:

```bash
docker-compose up --build
```

The API will then be accessible at `http://localhost:8099/v1/chat/completions`.
This repository is configured for deployment on Hugging Face Spaces using Docker.
- Create a new Space on Hugging Face, selecting "Docker" as the SDK.
- Link this repository to your Space.
- Navigate to your Space's "Settings" page.
- Go to the "Secrets" section.
- Add a new secret named `FLOWITH_AUTH_TOKEN` and paste your actual Flowith authorization token as the value. The application will automatically read this secret.
The Space will build the Docker image and start the service. The API endpoint will be available at your Space's URL (e.g., `https://your-username-your-space-name.hf.space/v1/chat/completions`).
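
Since the secret is exposed to the container as an environment variable, the application presumably picks it up along these lines (a hypothetical sketch, not the project's actual code):

```python
import os

# Hypothetical sketch of how the proxy reads its configuration.
# FLOWITH_AUTH_TOKEN comes from .env locally, or from a Space secret in production.
flowith_token = os.environ.get("FLOWITH_AUTH_TOKEN")
# API_KEY is optional; the README documents 123456 as the default.
api_key = os.environ.get("API_KEY", "123456")

if flowith_token is None:
    print("Warning: FLOWITH_AUTH_TOKEN is not set; requests to Flowith will fail.")
```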
**Chat Completions**

- **URL:** `/v1/chat/completions`
- **Method:** `POST`
- **Request Body:** Send a JSON payload conforming to the OpenAI Chat Completions API schema (e.g., specifying `model`, `messages`, `stream`, etc.). The `model` field should correspond to a key in `models.json`.
- **Authentication:** Requests must include an `Authorization` header with your API key, in the format `Bearer your_api_key`. For example, with the default key, the header would be `Authorization: Bearer 123456`.
- **Response:** The API returns either a standard JSON response or a server-sent event stream, mimicking the OpenAI API behavior based on the `stream` parameter in the request.

**Models**

- **URL:** `/v1/models`
- **Method:** `GET`
- **Description:** Returns a list of available models supported by the proxy, based on the configuration in `models.json`.
- **Authentication:** Requires the same `Authorization: Bearer <API_KEY>` header as the chat completions endpoint.
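
As an illustration, a request matching the schema above can be assembled in Python. Note that `"gpt-4o"` is a placeholder model name here; the valid names are whatever keys your `models.json` defines:

```python
import json

# Build an OpenAI-compatible chat completion request for the proxy.
payload = {
    "model": "gpt-4o",  # placeholder; must match a key in models.json
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}
headers = {
    "Authorization": "Bearer 123456",  # default API key from this README
    "Content-Type": "application/json",
}
body = json.dumps(payload)
print(body)
```

POST `body` with `headers` to `<your-base-url>/v1/chat/completions`; a GET to `/v1/models` with the same `Authorization` header returns the model list.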