Commit 3378f98 (1 parent: 6f99076)

WIP AI server docs.

5 files changed: +134 −10 lines changed

MyApp/_pages/ai-server/index.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ AI Server simplifies the integration and management of AI capabilities in your a
  ## Getting Started for Developers

  1. **Setup**: Follow the Quick Start guide to deploy AI Server.
- 2. **Configuration**: Use the Admin UI to add your AI providers and generate API keys.
+ 2. **Configuration**: Use the Admin Portal to add your AI providers and generate API keys.
  3. **Integration**: Choose your preferred language and use ServiceStack's Add ServiceStack Reference to generate type-safe client libraries.
  4. **Development**: Start making API calls to AI Server from your application, leveraging the full suite of AI capabilities.

Lines changed: 68 additions & 0 deletions
@@ -0,0 +1,68 @@
---
title: ComfyUI Extension
description: Installing and configuring the ComfyUI extension
---

# ComfyUI Extension

ComfyUI is a powerful image generation and manipulation tool that can create images from text, images from other images, and more. It is a key component of AI Server, providing a wide range of image processing capabilities.

To make the ComfyUI API more accessible, AI Server supports ComfyUI as a provider type. This lets you integrate ComfyUI into your AI Server instance as a remote self-hosted agent capable of processing image requests and other modalities.

## Installing the ComfyUI Extension

To simplify installation, [we have put together a Docker image and a Docker Compose file](https://github.com/serviceStack/agent-comfy) that comes bundled with the ComfyUI extension and all the necessary dependencies.

### Running the ComfyUI Extension

To run the ComfyUI extension, follow these steps:

1. **Clone the repository**: Clone the ComfyUI extension repository from GitHub.

```sh
git clone https://github.com/ServiceStack/agent-comfy.git
```

2. **Create your `.env` file**: Copy the example.env file to `.env`.

```sh
cp example.env .env
```

Then edit the `.env` file with your desired settings:

```sh
DEFAULT_MODELS=sdxl-lightning,flux-schnell
API_KEY=your_agent_api_key
HF_TOKEN=your_hf_token
CIVITAI_TOKEN=your_civitai_api_key
```

3. **Run Docker Compose**: Start the ComfyUI extension with Docker Compose.

```sh
docker compose up
```

### .env Configuration

The `.env` file is used to configure the ComfyUI extension during the initial setup, and is the easiest way to get started.

The keys available in the `.env` file are:

- **DEFAULT_MODELS**: Comma-separated list of models to load on startup, used to automatically download the models and their related dependencies. The full list of options can be found on your AI Server at `/lib/data/ai-models.json`.
- **API_KEY**: The API key your AI Server uses to authenticate with ComfyUI. If not provided, no authentication is required to access your ComfyUI instance.
- **HF_TOKEN**: The Hugging Face token used to authenticate with the Hugging Face API when downloading models. If not provided, models requiring Hugging Face authentication, such as those with user agreements, will not be downloaded.
- **CIVITAI_TOKEN**: The Civitai API key used to authenticate with the Civitai API when downloading models. If not provided, models requiring Civitai authentication, such as those with user agreements, will not be downloaded.

> Models requiring authentication to download are also flagged in the `/lib/data/ai-models.json` file.

### Accessing the ComfyUI Extension

Once the ComfyUI extension is running, you can access it at [http://localhost:7860](http://localhost:7860) and use it as a standard ComfyUI instance.
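As a quick check that the instance is reachable, ComfyUI's standard REST API can be queried directly. This sketch assumes no `API_KEY` was set; if one was, the request will need your configured authentication:

```sh
# Query ComfyUI's system stats endpoint to confirm the instance is up;
# it returns JSON describing the host OS, Python version, and detected GPUs
curl http://localhost:7860/system_stats
```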
AI Server has pre-defined workflows to interact with your ComfyUI instance to generate images, audio, text, and more.

These workflows are found in the AI Server AppHost project under `workflows`. They are templated JSON versions of workflows you save in the ComfyUI web interface.

### Advanced Configuration

ComfyUI workflows can be changed or overridden on a per-model basis by editing the `workflows` folder in the AI Server AppHost project. Flux Schnell is an example of overriding the text-to-image workflow for a single model; the related code can be found in `AiServer/Configure.AppHost.cs`.

MyApp/_pages/ai-server/install/configuration.md

Lines changed: 18 additions & 8 deletions
@@ -5,25 +5,27 @@ title: Configuring AI Server
  # Configuring AI Server

  AI Server makes orchestration of various AI providers easy by providing a unified gateway to process LLM, AI, and image transformation requests.
- It comes with an Admin Dashboard that allows you to configure your AI providers and generate API keys to control access.
+ It comes with an Admin Portal that allows you to configure your AI providers and generate API keys to control access.

- ## Accessing the Admin Dashboard
+ ## Accessing the Admin Portal

  Running AI Server will land you on a page showing access to:

- - **[Admin Dashboard](http://localhost:5005/admin)**: Centralized management of AI providers and API keys.
+ - **[Admin Portal](http://localhost:5005/admin)**: Centralized management of AI providers and API keys.
  - **[Admin UI](http://localhost:5005/admin-ui)**: ServiceStack's built-in Admin UI to manage your AI Server.
  - **[API Explorer](http://localhost:5005/ui)**: Explore and test the AI Server API endpoints in a friendly UI.
  - **[AI Server Documentation](https://docs.servicestack.net/ai-server/)**: Detailed documentation on how to use AI Server.

+ > The default password to access the Admin Portal is `p@55wOrd`. This can be changed in your `.env` file by setting the `AUTH_SECRET` key.

  ## Configuring AI Providers

  AI Providers are the external LLM-based services like OpenAI, Google, and Mistral that AI Server interacts with to process Chat requests.

  There are two ways to configure AI Providers:

  1. **.env File**: Update the `.env` file with your API keys and run the AI Server for the first time.
- 2. **Admin Dashboard**: Use the Admin Dashboard to add, edit, or remove AI Providers and generate AI Server API keys.
+ 2. **Admin Portal**: Use the Admin Portal to add, edit, or remove AI Providers and generate AI Server API keys.

  ### Using the .env File

@@ -39,11 +41,11 @@ The .env file is located in the root of the AI Server repository and contains th
  Providing the API keys in the .env file will automatically configure the AI Providers when you run the AI Server for the first time.

- ### Using the Admin Dashboard
+ ### Using the Admin Portal

- The Admin Dashboard provides a more interactive way to manage your AI Providers after the AI Server is running.
+ The Admin Portal provides a more interactive way to manage your AI Providers after the AI Server is running.

- To access the Admin Dashboard:
+ To access the Admin Portal:

  1. Navigate to [http://localhost:5005/admin](http://localhost:5005/admin).
  2. Log in with the default password `p@55wOrd`.
@@ -62,10 +64,18 @@ AI Server supports the following AI Providers:
  ## Generating AI Server API Keys

- API keys are used to authenticate requests to AI Server and are generated via the Admin Dashboard.
+ API keys are used to authenticate requests to AI Server and are generated via the Admin Portal.

  Here you can create new API keys, view existing keys, and revoke keys as needed.

  Keys can be created with expiration dates and restrictions to specific API endpoints, along with notes to help identify each key's purpose.
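As a sketch of how a generated key is used, a client passes it as a Bearer token on its requests. The endpoint path and model name below are assumptions for illustration; check your instance's API Explorer for the actual routes:

```sh
# Call AI Server with a generated API key as a Bearer token
# (endpoint path and model name are illustrative, not confirmed by these docs)
curl http://localhost:5005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $AI_SERVER_API_KEY" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}'
```

A revoked or expired key would cause such a request to be rejected with an authentication error.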

+ ## Stored File Management
+
+ AI Server stores the results of AI operations in pre-configured paths:
+
+ - **Artifacts**: AI-generated images, audio, and video files; the default path is `App_Data/artifacts`.
+ - **Files**: Cached variants and processed files; the default path is `App_Data/files`.
+
+ These paths can be configured in the `.env` file by setting the `ARTIFACTS_PATH` and `AI_FILES_PATH` keys.
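For example, to relocate both stores to another disk, the keys named above might be set in `.env` as follows (the paths themselves are illustrative):

```sh
# Illustrative overrides for AI Server's storage locations
ARTIFACTS_PATH=/mnt/storage/ai-server/artifacts
AI_FILES_PATH=/mnt/storage/ai-server/files
```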

MyApp/_pages/ai-server/install/index.md

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ docker compose up
  ## Accessing AI Server

- Once the AI Server is running, you can access the Admin UI at [http://localhost:5005](http://localhost:5005) to configure your AI providers and generate API keys.
+ Once the AI Server is running, you can access the Admin Portal at [http://localhost:5005/admin](http://localhost:5005/admin) to configure your AI providers and generate API keys.
  If you first ran the AI Server with configured API Keys in your `.env` file, your providers will be automatically configured for the related services.

  > You can reset the process by deleting your local `App_Data` directory and rerunning `docker compose up`.
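The reset tip above amounts to two commands run from the repository root:

```sh
# Delete locally stored state (configured providers, API keys, cached files)
rm -rf App_Data
# Recreate everything from your .env configuration
docker compose up
```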
Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
---
title: Self-hosted AI Providers with Ollama
---

# Self-hosted AI Providers with Ollama

Ollama can be used as an AI Provider type to process LLM requests in AI Server.

## Setting up Ollama

When using Ollama as an AI Provider, you will need to ensure the models you want to use are available in your Ollama instance.

This can be done with the command `ollama pull <model-name>`, which downloads the model from the [Ollama library](https://ollama.com/library).

Once the model is downloaded and your Ollama instance is running and accessible to AI Server, you can configure Ollama as an AI Provider in the AI Server Admin Portal.
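For example, preparing an Ollama instance for use with AI Server might look like this (the model name is illustrative; any model from the Ollama library works):

```sh
# Download a model from the Ollama library
ollama pull llama3.1

# Confirm the model is available locally before configuring the provider
ollama list
```

By default Ollama serves its API on `http://localhost:11434`, which is the endpoint you will enter in the provider form below.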
## Configuring Ollama in AI Server

Navigate to the Admin Portal in AI Server and select the **AI Providers** menu item in the left sidebar.

![AI Providers](/images/ai-server/ai-providers.png)

Click the **New Provider** button at the top of the grid.

![New Provider](/images/ai-server/new-provider.png)

Select Ollama as the Provider Type at the top of the form, and fill in the required fields:

- **Name**: A friendly name for the provider.
- **Endpoint**: The URL of your Ollama instance, e.g. `http://localhost:11434`.
- **API Key**: Optional API key to authenticate with your Ollama instance.
- **Priority**: The priority of the provider, used to determine the order of provider selection when multiple providers offer the same model.

![Ollama Provider](/images/ai-server/ollama-provider.png)

Once the URL and API Key are set, requests are made to your Ollama instance to list its available models, which are then displayed as options to enable for the provider you are configuring.

![Ollama Models](/images/ai-server/ollama-models.png)

Select the models you want to enable for this provider, and click **Save** to save the provider configuration.

## Using Ollama models in AI Server

Once configured, you can make requests to AI Server to process LLM requests using the models available in your Ollama instance.

Model names in AI Server are common across all providers, enabling you to switch or load balance between providers without changing your client code. See [Usage](https://docs.servicestack.net/ai-server/usage/) for more information on making requests to AI Server.
