Issue
Setup currently supports only Ollama for local LLM inference. Expanding support to other local inference tools (e.g. vLLM, llama.cpp, koboldcpp) would let users work with the tooling they are most comfortable with, or that best suits their needs.
Solution
Add support for other inference tools to the chat interface, and alter the Docker setup so the inference tools can run inside the Docker container rather than only via a bridge to the host machine.
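One possible approach: all of the listed tools can expose an OpenAI-compatible chat endpoint (Ollama's `/v1` route, vLLM's API server, llama.cpp's `llama-server`, koboldcpp), so the chat interface could stay backend-agnostic and only swap the base URL. Below is a minimal sketch of that idea; the service names, ports, model name, and the `BACKENDS`/`chat` helpers are placeholders for illustration, not part of the existing setup.

```python
# Sketch: route chat requests to whichever local backend is selected,
# assuming each one serves an OpenAI-compatible API.
from openai import OpenAI

# Hypothetical registry keyed by backend name; hostnames would be
# Docker service names if the tools run inside the compose stack.
BACKENDS = {
    "ollama":    "http://ollama:11434/v1",     # Ollama's OpenAI-compatible route
    "vllm":      "http://vllm:8000/v1",        # vLLM OpenAI-compatible server (default port)
    "llama.cpp": "http://llama-cpp:8080/v1",   # llama.cpp llama-server
    "koboldcpp": "http://koboldcpp:5001/v1",   # koboldcpp OpenAI-compatible endpoint
}

def chat(backend: str, model: str, user_message: str) -> str:
    """Send a single-turn chat completion to the chosen backend."""
    # Local servers typically ignore the API key, but the client requires one.
    client = OpenAI(base_url=BACKENDS[backend], api_key="not-needed")
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": user_message}],
    )
    return response.choices[0].message.content

# Same call, different backend, no change to the chat logic:
# print(chat("vllm", "mistralai/Mistral-7B-Instruct-v0.2", "Hello!"))
```

Targeting the shared OpenAI-compatible surface would keep per-backend code to a minimum; anything tool-specific (model pulling, quantization options) could live behind the same registry.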