-
I need to use Agenta completely on a local server with the local LLM platform Ollama. How can I do that?
-
Hey @kmx2! I'm here to help you out with any issues or questions you have. To integrate Agenta with the Ollama LLM platform and use it entirely on a local server, follow these steps:

By following these steps, you can run Agenta with Ollama entirely on a local server [1][2][3].
-
Hey @kmx2, this should be simple to do. You need to create a custom application that calls your local Ollama endpoint. You can see an example of how to create such a custom application here: https://docs.agenta.ai/guides/tutorials/deploy-mistral-model. In that example we call Mistral on Hugging Face, but you can do exactly the same thing by calling Ollama locally.
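For reference, here is a minimal sketch of such a custom application. It assumes the Agenta Python SDK conventions used in the linked Mistral tutorial (`ag.init()` and the `@ag.entrypoint` decorator, which may differ across SDK versions) and Ollama's default REST endpoint at http://localhost:11434/api/generate; the model name "llama3" is just a placeholder.

```python
import agenta as ag
import requests

# Base URL of the local Ollama server (11434 is Ollama's default port)
OLLAMA_URL = "http://localhost:11434/api/generate"

ag.init()

@ag.entrypoint
def generate(prompt: str) -> str:
    # "llama3" is a placeholder; use any model you have pulled with `ollama pull`
    response = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    # With stream=False, Ollama returns the full completion under the "response" key
    return response.json()["response"]
```

Once this is served as a custom application (as in the tutorial), the prompt and outputs show up in the Agenta playground like any other app, while all inference stays on your machine.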
-
Hey @mmabrouk,
-
What's the status of this? I would love to see support for Ollama in the model playground! Ollama has quickly become one of the most popular LLM inference frameworks. Adding an Ollama provider would open up new possibilities for the many developers who run models locally.
-
Ollama, Bedrock, Azure, and any model behind a custom OpenAI-compatible API are now supported in Agenta starting with v0.37.0.
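For anyone wiring this up: Ollama exposes an OpenAI-compatible API at http://localhost:11434/v1, which is the kind of endpoint that custom OpenAI-compatible provider support can point at. A quick sketch to verify the local endpoint works before configuring it in Agenta (the model name and API key value are placeholders; Ollama ignores the key, but the client requires a non-empty value):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local Ollama server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

completion = client.chat.completions.create(
    model="llama3",  # placeholder; any model pulled via `ollama pull` works
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(completion.choices[0].message.content)
```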