
Commit d748de7

Document how to use remote server with Ollama
1 parent 4dae4eb

3 files changed: 6 additions & 1 deletion

docs/source/user_guide_rag.rst

Lines changed: 4 additions & 1 deletion

@@ -223,7 +223,10 @@ it can be queried using the following:
 .. code:: python

     from neo4j_graphrag.llm import OllamaLLM
-    llm = OllamaLLM(model_name="orca-mini")
+    llm = OllamaLLM(
+        model_name="orca-mini",
+        # host="...",  # when using a remote server
+    )
     llm.invoke("say something")
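For readers applying this change, a minimal sketch of the documented pattern with the new host parameter filled in. The server URL is a hypothetical placeholder (11434 is Ollama's default port), not a value from this commit:

from neo4j_graphrag.llm import OllamaLLM

llm = OllamaLLM(
    model_name="orca-mini",
    # hypothetical remote address; replace with your own server's URL
    host="http://my-ollama-server:11434",
)
res = llm.invoke("say something")
print(res.content)  # the response object exposes the generated text via .content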

examples/customize/embeddings/ollama_embeddings.py

Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@

 embeder = OllamaEmbeddings(
     model="<model_name>",
+    # host="...",  # if using a remote server
 )
 res = embeder.embed_query("my question")
 print(res[:10])
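The same host option covers remote embeddings. A minimal sketch, keeping the example's placeholder model name; the server URL is hypothetical, and the import path is assumed from the package layout since the example's actual import sits outside this hunk:

from neo4j_graphrag.embeddings import OllamaEmbeddings

embeder = OllamaEmbeddings(
    model="<model_name>",
    # hypothetical remote address; replace with your own server's URL
    host="http://my-ollama-server:11434",
)
res = embeder.embed_query("my question")
print(res[:10])  # first ten values of the embedding vector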

examples/customize/llms/ollama_llm.py

Lines changed: 1 addition & 0 deletions

@@ -6,6 +6,7 @@

 llm = OllamaLLM(
     model_name="<model_name>",
+    # host="...",  # if using a remote server
 )
 res: LLMResponse = llm.invoke("What is the additive color model?")
 print(res.content)
