Replies: 3 comments
-
Same error on Windows (settings keys involved: embedding:, local:, openai:).
-
Same issue here; any guidance is appreciated.
-
This is for Windows 11.

conda create -n privategpt python=3.11

Install Ollama on Windows. After installation, stop the Ollama server. In the privateGPT folder, with the privategpt env active, here is the file settings-ollama.yaml:

llm:
embedding:
ollama:
vectorstore:
qdrant:

This should run similarly under Docker.
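As an illustration only (the comment above lists just the top-level keys, not their values), a minimal settings-ollama.yaml following that skeleton might look like the sketch below; every value here is an assumption, not the commenter's actual configuration:

```yaml
# Illustrative sketch only -- all values below are assumptions,
# not the original poster's configuration.
llm:
  mode: ollama
embedding:
  mode: ollama
ollama:
  llm_model: mistral                 # assumed model name
  embedding_model: nomic-embed-text  # assumed model name
  api_base: http://localhost:11434   # Ollama's default local endpoint
vectorstore:
  database: qdrant
qdrant:
  path: local_data/private_gpt/qdrant  # assumed local storage path
```

Check the keys against the settings file shipped with your privateGPT version before using anything like this.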
-
My local installation on WSL2 stopped working all of a sudden yesterday. It had been working fine, and without any changes it suddenly started throwing StopAsyncIteration exceptions.
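For context on the exception itself: StopAsyncIteration is what Python raises when an async generator is exhausted and asked for another item. A minimal standalone sketch (unrelated to privateGPT's internals) showing where it comes from:

```python
import asyncio

# Minimal illustration: calling __anext__ on an exhausted async
# generator raises StopAsyncIteration -- the exception named in
# the comment above.
async def stream():
    yield "token"

async def main():
    gen = stream()
    first = await gen.__anext__()   # consumes the only item
    try:
        await gen.__anext__()       # generator is now exhausted
    except StopAsyncIteration:
        return first, "StopAsyncIteration raised"

result = asyncio.run(main())
print(result)  # ('token', 'StopAsyncIteration raised')
```

In a streaming LLM server, seeing this propagate to the caller usually means a response stream ended earlier than the consuming code expected.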
Since setting everything up from scratch, including WSL2 made no difference, I realised my only option is to run it in Docker.
Being new to Docker, I installed Docker Desktop and expected it to be a seamless process using the provided Dockerfile. However, building and running the Dockerfile directly results in a missing-model error.
So I tried to set it up with "docker compose up" instead, hoping it would pick up the model from the local folder, but it doesn't. In addition, I get the error below.
Can anyone provide a simple guide as to how to get privateGPT working as a docker image?