Add Support for More Local LLM Inferencing Tools #2

Open
jmemcc opened this issue Jan 23, 2025 · 0 comments
jmemcc commented Jan 23, 2025

Issue

Setup currently supports only Ollama for local LLM inferencing. Expanding support to other local LLM inferencing tools (e.g. vLLM, llama.cpp, koboldcpp) would let users work with the inference tooling they are most comfortable with, or that best suits their needs.

Solution

Add support for other inference tools to the chat interface, and alter the Docker setup so that inference tools can run inside the Docker container, not just via a bridge to the host machine. A rough sketch of one possible backend abstraction is shown below.
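
As a rough illustration only (not the project's actual implementation), the sketch below assumes the common pattern that vLLM, llama.cpp's llama-server, and koboldcpp can each expose an OpenAI-compatible `/v1/chat/completions` endpoint, while Ollama also provides its native `/api/chat`. The `Backend` registry, the `chat` helper, and the base URLs/ports are hypothetical placeholders; the ports shown are the tools' usual defaults and depend on how each server is launched. Inside Docker, the base URLs would point at compose service names (e.g. `http://ollama:11434`) rather than a host bridge such as `http://host.docker.internal:11434`.

```python
"""Minimal sketch of a pluggable local-inference backend layer (hypothetical names)."""
from dataclasses import dataclass

import requests


@dataclass
class Backend:
    name: str
    base_url: str
    chat_path: str = "/v1/chat/completions"  # OpenAI-compatible default


# Illustrative registry -- URLs/ports are assumptions, not project config.
BACKENDS = {
    "ollama": Backend("ollama", "http://localhost:11434", "/api/chat"),
    "vllm": Backend("vllm", "http://localhost:8000"),
    "llama.cpp": Backend("llama.cpp", "http://localhost:8080"),
    "koboldcpp": Backend("koboldcpp", "http://localhost:5001"),
}


def chat(backend: Backend, model: str, messages: list[dict]) -> str:
    """Send a non-streaming chat request and return the assistant reply text."""
    payload = {"model": model, "messages": messages, "stream": False}
    resp = requests.post(backend.base_url + backend.chat_path, json=payload, timeout=120)
    resp.raise_for_status()
    data = resp.json()
    if backend.chat_path == "/api/chat":
        # Ollama's native response schema
        return data["message"]["content"]
    # OpenAI-style response schema (vLLM, llama-server, koboldcpp)
    return data["choices"][0]["message"]["content"]


if __name__ == "__main__":
    reply = chat(BACKENDS["ollama"], "llama3", [{"role": "user", "content": "Hello"}])
    print(reply)
```

The chat interface would then only need to select a `Backend` from the registry, keeping the rest of the request/response handling identical across tools.
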

jmemcc added the enhancement (New feature or request) label on Jan 23, 2025
jmemcc self-assigned this on Jan 23, 2025