docs/how_to/local_llms/ #27637
Replies: 3 comments
-
Nice guide! However, I wonder if there's a way to use a generic LLM locally, and not just the ones covered in the guide here.
-
I found that to create an `OllamaLLM` you must add a `base_url` parameter, e.g. `llm = OllamaLLM(model="llama3.1:8b", base_url="http://localhost:11434")`; otherwise a `ConnectError: [WinError 10049]` exception is thrown.
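For anyone hitting the same error, here is a minimal sketch based on the snippet above; it assumes the `langchain-ollama` package is installed and an Ollama server is already running on its default port:

```python
from langchain_ollama import OllamaLLM

# Point OllamaLLM at the local Ollama server explicitly; 11434 is Ollama's
# default port. Passing base_url avoids the WinError 10049 ConnectError
# described above on some Windows setups.
llm = OllamaLLM(
    model="llama3.1:8b",
    base_url="http://localhost:11434",
)

print(llm.invoke("Why do llamas hum?"))
```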
-
It would be nice to understand how to consume a model that is already being served, rather than running/invoking the model directly within a Python service. From these examples, it looks like the model must run directly from the Python code, rather than just being accessed via OpenAI-compatible REST API endpoints (presumably being exposed on …).
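One possible sketch of what I mean, assuming an OpenAI-compatible server is already running locally (for example Ollama's `/v1` endpoint or a llama.cpp server). The URL, port, and model name here are assumptions for illustration, not something from the guide:

```python
from langchain_openai import ChatOpenAI

# Consume a model that is already being served over an OpenAI-compatible
# REST API, instead of launching it from the Python process. The base_url,
# port, and model name below are placeholders for whatever your server exposes.
llm = ChatOpenAI(
    model="llama3.1:8b",
    base_url="http://localhost:11434/v1",  # any OpenAI-compatible endpoint
    api_key="not-needed",                  # local servers typically ignore the key
)

print(llm.invoke("Summarize the benefits of running LLMs locally.").content)
```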
-
docs/how_to/local_llms/
Use case
https://python.langchain.com/docs/how_to/local_llms/