---
description: Installing and configuring the ComfyUI extension
---

# ComfyUI Extension
ComfyUI is a powerful image generation and manipulation tool that can be used to create images from text, images from images, and more. It is a key component of AI Server that provides a wide range of image processing capabilities.

To make the ComfyUI API more accessible, AI Server supports ComfyUI as a provider type. This allows you to easily integrate ComfyUI into your AI Server instance, using it as a remote self-hosted agent capable of processing image requests and other modalities.
## Installing the ComfyUI Extension
To make installation easier, [we have put together a Docker image and a Docker Compose file](https://github.com/serviceStack/agent-comfy) that you can use to get started with ComfyUI in AI Server, pre-bundled with the ComfyUI extension and all the necessary dependencies.
### Running the ComfyUI Extension
To run the ComfyUI extension, follow these steps:

1. **Clone the Repository**: Clone the ComfyUI extension repository from GitHub, as shown below.
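A minimal sketch of this step, using the repository linked above (the `agent-comfy` directory name is assumed from the repository name):

```sh
# clone the bundled ComfyUI extension repo and enter it
git clone https://github.com/serviceStack/agent-comfy.git
cd agent-comfy
```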
2. **Edit the example.env File**: First copy the `example.env` file to `.env`:
```sh
cp example.env .env
```

And then edit the `.env` file with your desired settings:
```sh
DEFAULT_MODELS=sdxl-lightning,flux-schnell
API_KEY=your_agent_api_key
HF_TOKEN=your_hf_token
CIVITAI_TOKEN=your_civitai_api_key
```

3. **Run Docker Compose**: Start the ComfyUI extension with Docker Compose.
```sh
docker compose up
```
### .env Configuration
The `.env` file is used to configure the ComfyUI extension during the initial setup, and is the easiest way to get started.

The keys available in the `.env` file are:
- **DEFAULT_MODELS**: Comma-separated list of models to load on startup. This will be used to automatically download the models and their related dependencies. The full list of options can be found on your AI Server at `/lib/data/ai-models.json`.
- **API_KEY**: The API key that your AI Server will use to authenticate with this ComfyUI instance. If not provided, no authentication will be required to access your ComfyUI instance.
- **HF_TOKEN**: The Hugging Face token used to authenticate with the Hugging Face API when downloading models. If not provided, models requiring Hugging Face authentication, such as those with user agreements, will not be downloaded.
- **CIVITAI_TOKEN**: The Civitai API key used to authenticate with the Civitai API when downloading models. If not provided, models requiring Civitai authentication, such as those with user agreements, will not be downloaded.
> Models requiring authentication to download are also flagged in the `/lib/data/ai-models.json` file.
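For example, you can inspect this model catalog directly; this sketch assumes AI Server is running locally at its default `http://localhost:5005`:

```sh
# list the models AI Server can provision, including their auth flags
curl http://localhost:5005/lib/data/ai-models.json
```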
### Accessing the ComfyUI Extension
Once the ComfyUI extension is running, you can access it at [http://localhost:7860](http://localhost:7860), where it can be used as a standard ComfyUI instance.
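As a quick connectivity check, you can query the instance over HTTP; this assumes the bundled instance exposes the standard ComfyUI `/system_stats` endpoint:

```sh
# returns JSON with system and device details if the instance is up
curl http://localhost:7860/system_stats
```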
The AI Server has pre-defined workflows to interact with your ComfyUI instance to generate images, audio, text, and more.

These workflows are found in the AI Server AppHost project under `workflows`. These are templated JSON versions of workflows you save in the ComfyUI web interface.
### Advanced Configuration
ComfyUI workflows can be changed or overridden on a per-model basis by editing the `workflows` folder in the AI Server AppHost project. Flux Schnell is an example of overriding text-to-image for just a single workflow; the code for this can be found in `AiServer/Configure.AppHost.cs`.
## Accessing AI Server
Once the AI Server is running, you can access the Admin Portal at [http://localhost:5005/admin](http://localhost:5005/admin) to configure your AI providers and generate API keys.

If you first ran the AI Server with configured API Keys in your `.env` file, your providers will be automatically configured for the related services.
> You can reset the process by deleting your local `App_Data` directory and rerunning `docker compose up`.
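A minimal sketch of that reset, assuming you run it from the directory containing your `docker-compose.yml` and `App_Data`:

```sh
# remove cached state, then re-run the initial setup
rm -rf App_Data
docker compose up
```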
# Ollama Provider

Ollama can be used as an AI Provider type to process LLM requests in AI Server.
## Setting up Ollama
When using Ollama as an AI Provider, you will need to ensure the models you want to use are available in your Ollama instance.

This can be done via the command `ollama pull <model-name>` to download the model from the [Ollama library](https://ollama.com/library).
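For example, pulling a model by name; the model shown here is just an illustration, and any model from the library works the same way:

```sh
# download a model into the local Ollama instance
ollama pull llama3
```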
Once the model is downloaded, and your Ollama instance is running and accessible to AI Server, you can configure Ollama as an AI Provider in the AI Server Admin Portal.
## Configuring Ollama in AI Server
Navigate to the Admin Portal in AI Server and select the **AI Providers** menu item in the left sidebar. Create a new provider using the Ollama provider type, and set the URL and API Key for your Ollama instance.

Once the URL and API Key are set, requests will be made to your Ollama instance to list available models. These will then be displayed as options to enable for the provider you are configuring.

Select the models you want to enable for this provider, and click **Save** to save the provider configuration.
## Using Ollama models in AI Server
Once configured, you can make requests to AI Server to process LLM requests using the models available in your Ollama instance.

Model names in AI Server are common across all providers, enabling you to switch or load balance between providers without changing your client code. See [Usage](https://docs.servicestack.net/ai-server/usage/) for more information on making requests to AI Server.
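As an illustrative sketch only, assuming your AI Server exposes an OpenAI-compatible chat completions endpoint as described in the Usage docs; the endpoint path, model name, and API key here are placeholders:

```sh
# placeholder endpoint, model, and key; see the Usage docs for the real request shape
curl http://localhost:5005/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your_api_key" \
  -d '{"model":"llama3","messages":[{"role":"user","content":"Hello"}]}'
```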