
Add support for LLM configuration inside the app #14

@Dobidop

Description


Problem

The application appears to rely on hardcoded endpoints for its LLM and diffusion model sources, which makes it inflexible and limits the choice of backend model providers.

Issue

Users cannot reconfigure the application to point at alternative LLM or diffusion model backends (e.g., local models or self-hosted APIs), which restricts the app to a single provider such as OpenAI.

Proposed Solution

Introduce a configurable setting inside the app that lets users specify a custom API endpoint compatible with OpenAI-style APIs. The application could then query the /models endpoint (or equivalent) to dynamically fetch and list the available models, enabling local models or other third-party APIs without code changes. A sketch of this is shown below.
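As a rough sketch (function and parameter names here are hypothetical, not from the existing codebase), fetching the model list from a user-supplied, OpenAI-compatible base URL could look something like this:

```python
import requests

def list_models(base_url: str, api_key: str | None = None) -> list[str]:
    """Fetch model IDs from an OpenAI-compatible /models endpoint.

    `base_url` is whatever the user entered in the app's settings,
    e.g. "https://api.openai.com/v1" or "http://localhost:11434/v1".
    """
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    resp = requests.get(f"{base_url.rstrip('/')}/models", headers=headers, timeout=10)
    resp.raise_for_status()
    # OpenAI-compatible servers return {"object": "list", "data": [{"id": ...}, ...]}
    return [model["id"] for model in resp.json()["data"]]
```

The returned IDs could populate a model dropdown in the settings UI, and the same base URL / API key pair would then be reused for the actual completion calls.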

Image generation from alternative diffusion backends (perhaps via a ComfyUI instance) is less important, but would also be useful.
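For the diffusion side, a user-configured server address could be enough, since ComfyUI exposes an HTTP API. A minimal sketch (again with hypothetical names, and assuming a workflow already exported in ComfyUI's API format):

```python
import requests

def queue_comfyui_workflow(server: str, workflow: dict) -> str:
    """Queue a workflow on a ComfyUI instance and return its prompt id.

    `server` would come from the same in-app settings,
    e.g. "http://127.0.0.1:8188".
    """
    resp = requests.post(f"{server}/prompt", json={"prompt": workflow}, timeout=10)
    resp.raise_for_status()
    return resp.json()["prompt_id"]
```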
