Replies: 1 comment
-
I don't know why so many GitHub accounts have been banned recently. Do you know why?
-
Dear all,
Using the latest release of privateGPT, I get the following error when using intfloat/multilingual-e5-base for the embeddings:
CUDA error 710 at /tmp/pip-install-281klxnb/llama-cpp-python_cf43f0d85af444668d34ca5d9953a909/vendor/llama.cpp/ggml-cuda.cu:7656: device-side assert triggered
current device: 0
This does not happen when using the same model in the previous release. Does anyone have an idea what I would need to change to get rid of this? My guess is that the chunking works differently.
Btw: this only seems to happen for large text files, ~2000 tokens and more. When I use intfloat/multilingual-e5-small, everything runs smoothly.
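Since the error only shows up past ~2000 tokens, one workaround worth trying is splitting long documents into chunks that fit the embedding model's context window before they reach the GPU. A minimal sketch (whitespace "tokens" stand in for the model's real tokenizer; 512 is the typical e5 context length, and the function name and parameters here are illustrative, not privateGPT's actual API):

```python
# Sketch: split long text into overlapping chunks small enough for an
# embedding model's context window. Whitespace splitting is only a proxy
# for the model's real tokenizer; e5 models typically accept 512 tokens.

def chunk_text(text: str, max_tokens: int = 512, overlap: int = 32) -> list[str]:
    """Split text into overlapping chunks of at most max_tokens tokens."""
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return [text]
    chunks = []
    step = max_tokens - overlap  # advance so consecutive chunks share context
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + max_tokens]))
        if start + max_tokens >= len(tokens):
            break  # last chunk already covers the tail
    return chunks
```

If the new release changed its chunking defaults, forcing a smaller chunk size like this might explain why the small model still works while the base model asserts on the device.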
Hardware: RTX 3060 with 12 GB of VRAM. In the previous release, even the large model only used about 4 GB.
Cheers, Nada