OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull qwen:14b`.
#20913
Example Code

```
ollama list
NAME          ID              SIZE      MODIFIED
llama2:13b    990f930d55c5    7.4 GB    7 days ago
qwen:14b      80362ced6553    8.2 GB    6 days ago
```
The model is already available. I can call it successfully with the OpenAI-compatible client:
```python
from openai import OpenAI

client = OpenAI(
    base_url='http://ip:11434/v1',
    api_key='ollama',  # required by the client, but unused by Ollama
)
response = client.chat.completions.create(
    model="llama2:13b",
    messages=[
        # {"role": "system", "content": "You are a helpful assistant."},
        # {"role": "user", "content": "Who won the world series in 2020?"},
        # {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": "how are you"},
    ],
)
print(response.choices[0].message.content)
```
But when I call Ollama through LangChain, an error is raised:
```python
from langchain_community.llms.ollama import Ollama

base_url = 'http://ip:11434/'
api_key = 'ollama'  # unused by the Ollama class
# model = "llama2:13b"
model = "qwen:14b"

chat_model = Ollama(model=model, base_url=base_url)
response = chat_model("how are you?")
print(response)
```
The error is as follows:
```
OllamaEndpointNotFoundError               Traceback (most recent call last)
Cell In[3], line 15
     11 # chat_model = ChatOpenAI(model_name=model,openai_api_key=api_key,openai_api_base=base_url,temperature=0.3)
     12 # chat_model = OpenAI(model_name=model,openai_api_key=api_key,openai_api_base=base_url,temperature=0.3)
     13 chat_model = Ollama(model=model,base_url=base_url)
---> 15 response = chat_model("how are you")
     16 print(response)

File ~/.conda/envs/openai/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:148, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
    146 warned = True
    147 emit_warning()
--> 148 return wrapped(*args, **kwargs)

File ~/.conda/envs/openai/lib/python3.10/site-packages/langchain_core/language_models/llms.py:1086, in BaseLLM.__call__(self, prompt, stop, callbacks, tags, metadata, **kwargs)
   1079 if not isinstance(prompt, str):
   1080     raise ValueError(
   1081         "Argument `prompt` is expected to be a string. Instead found "
   1082         f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
   1083         "`generate` instead."
   1084     )
   1085 return (
-> 1086     self.generate(
   1087         [prompt],
   1088         stop=stop,
...
    248     )
    249 else:
    250     optional_detail = response.text

OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull qwen:14b`.
```
Is there a way to solve this? Thank you.

Description

I call the Ollama LLM server (OpenAI-compatible API) from LangChain, unsuccessfully.

System Info

LangChain version: 0.1.16
-
To address the `OllamaEndpointNotFoundError`, first confirm that `qwen:14b` is actually present on the server that `base_url` points to: run `ollama pull qwen:14b` on that machine, or query the native API for the list of pulled models, as in the sketch below.
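A minimal sketch of that check, assuming the same placeholder host `ip` as in the question:

```python
import requests

# /api/tags is Ollama's native endpoint for listing pulled models.
resp = requests.get("http://ip:11434/api/tags")
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])
```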
Second, check the `base_url` itself. The LangChain `Ollama` class talks to Ollama's native API rather than the OpenAI-compatible `/v1` endpoint, and it builds the request URL as `{base_url}/api/generate`. With `base_url = 'http://ip:11434/'`, the trailing slash yields `http://ip:11434//api/generate`, and the server can answer that with a 404, the same status it returns for a missing model, which is why the error message points at the model. Try `base_url = 'http://ip:11434'` with no trailing slash and no `/v1` suffix.

These steps should help you resolve the issue and successfully call the model through LangChain.
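For reference, a sketch of the corrected call, assuming `qwen:14b` has been pulled on that server:

```python
from langchain_community.llms.ollama import Ollama

# No trailing slash and no /v1 suffix: this integration targets
# Ollama's native API, not the OpenAI-compatible endpoint.
llm = Ollama(model="qwen:14b", base_url="http://ip:11434")

# invoke() replaces calling the LLM object directly, which is
# deprecated (hence the deprecation wrapper in the traceback).
print(llm.invoke("how are you?"))
```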
-
I have a very similar issue with the following code. This works:
This does not:
You solved my problem, many thanks. I tried the following code and it works.