Replies: 1 comment
Hey, yes, it's probably possible, but there isn't really technical support on an individual basis from me or any contributors. If you find a way to achieve it, please share it for others. Many thanks!
Have set up Twinny on my Linux PC and used it with the Qwen-2.5-7B-Coder model (running on CPU at about 8-10 tk/s) via Ollama, as I don't have a dGPU. Wondering if Twinny could be configured to use Mistral's hosted public Codestral-22B inference service for FIM, and Anthropic/Gemini/ChatGPT/Groq/Cerebras free hosted inference for chat etc.? If so, what should the provider configuration look like?
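For anyone attempting this, here is a minimal sketch of what the two provider entries might look like. It assumes Twinny's add-provider fields (label, type, provider, protocol, hostname, path, model, API key); the Mistral FIM endpoint (`/v1/fim/completions` on `api.mistral.ai`) and Groq's OpenAI-compatible endpoint are real, but the field names, the `openai-compatible` provider value, and the model names are illustrative assumptions to verify against Twinny's docs and each provider's model list:

```jsonc
// Hypothetical Twinny provider entries -- field names are assumptions
// based on Twinny's add-provider dialog, not a confirmed schema.
[
  {
    "label": "Codestral FIM",
    "type": "fim",
    "provider": "openai-compatible",  // assumption; use whichever hosted-API option Twinny offers
    "protocol": "https",
    "hostname": "api.mistral.ai",
    "path": "/v1/fim/completions",    // Mistral's dedicated FIM endpoint
    "model": "codestral-latest",
    "apiKey": "<MISTRAL_API_KEY>"
  },
  {
    "label": "Groq chat",
    "type": "chat",
    "provider": "openai-compatible",
    "protocol": "https",
    "hostname": "api.groq.com",
    "path": "/openai/v1/chat/completions",  // Groq's OpenAI-compatible endpoint
    "model": "llama-3.1-70b-versatile",     // example model; check Groq's current list
    "apiKey": "<GROQ_API_KEY>"
  }
]
```

The same pattern should extend to Anthropic/Gemini/Cerebras chat providers that expose an OpenAI-compatible endpoint; for FIM specifically, the model has to support fill-in-the-middle, which is why Codestral is paired with the FIM entry here.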