it's better to support ollama #379
Replies: 2 comments 2 replies
-
hey, as mentioned here, I will not be putting time into supporting multiple model providers. These will have to come from the community, and they will be used at your own discretion :) We have had successful contributions for Azure OpenAI and an ongoing one for Deepseek. Ollama has the added complexity that you would most likely need GPUs to run reasonable models, in which case using Mac runners with the MLX backend or an external Nvidia GPU (which I cannot test) is preferred.
-
Hi, I believe this is related. I was able to download the qwen2.5-coder model onto a locally running Ollama instance, and if that was the question, the configuration actually already allows that setup. I was able to change the args.ts configuration to make it work, starting from .option('modelString', { — a rough sketch of the change is below.
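A minimal sketch of what that args.ts change might look like, assuming the CLI parses its flags with yargs and is wired to an OpenAI-compatible client; the baseURL option and the default values shown here are illustrative assumptions, not the project's actual code:

```ts
// args.ts sketch — assumes a yargs-based CLI; the `baseURL` option and the
// defaults below are illustrative, not part of the project.
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';

export const argv = yargs(hideBin(process.argv))
  .option('modelString', {
    type: 'string',
    // e.g. a local model pulled with `ollama pull qwen2.5-coder`
    default: 'qwen2.5-coder',
    describe: 'Model identifier passed to the provider',
  })
  .option('baseURL', {
    type: 'string',
    // Ollama serves an OpenAI-compatible API at this address by default
    default: 'http://localhost:11434/v1',
    describe: 'OpenAI-compatible API endpoint (illustrative option)',
  })
  .parseSync();
```

Since Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1 by default, an OpenAI-style client can be pointed at that endpoint (the API key can be any placeholder string).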
-
Do you have a plan to support Ollama? Thanks in advance.