Replies: 7 comments
-
Yes, the candle version is having problems running ColPali. Can you please try ONNX inference? It's lighter and better.
-
Thanks for your reply, but the ONNX version only runs on the CPU. I need to evaluate it on the GPU.
Juan Carlos Rodriguez
-
OK then, let me check what I can do.
-
Hi, so I looked into it. We can't release the candle version with GPU support yet, because it also needs to be released for Metal and we don't have access to that hardware right now. But we do have ONNX support on GPU, so if you just want to evaluate on a GPU, you can go ahead with it. Thanks.
-
Good afternoon.
The ColPali ONNX version only runs on the CPU when I follow the online instructions.
Can you tell me where I can find information on how to run the ONNX version on the GPU?
Thank you very much for your response.
-
Hey @quirogaco, thanks for the question. You can use the ColPali-ONNX notebook given here. Just remember, if you haven't installed Torch anywhere, you might need to install cuDNN, which is already covered in the example. You can also try the Colab file we have with a GPU.
Notebook: https://github.com/StarlightSearch/EmbedAnything/blob/main/examples/notebooks/colpali.ipynb
Colab file: https://colab.research.google.com/drive/1CowJrqZxDDYJzkclI-rbHaZHgL9C6K3p?usp=sharing
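On a Linux box without Torch (which bundles cuDNN), a quick way to see whether the CUDA/cuDNN shared libraries that GPU inference needs are even visible to the dynamic loader. This is a minimal stdlib sketch; the helper name and the library list are assumptions for illustration, not part of EmbedAnything:

```python
import ctypes.util

def missing_gpu_libs(names=("cudart", "cudnn")):
    """Return the CUDA-stack shared libraries the dynamic loader cannot find.

    `names` are the usual Linux library names (libcudart.so, libcudnn.so);
    an empty result suggests the loader can resolve the GPU stack.
    """
    return [n for n in names if ctypes.util.find_library(n) is None]

# On a machine without the CUDA toolkit this typically returns both names.
print(missing_gpu_libs())
```

If `cudnn` shows up as missing, installing cuDNN (as the notebook example does) is the likely fix before retrying GPU inference.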
-
Hey, I hope it was useful. Let me know if you need any other help. Please don't forget to support us and give us a star.
-
WSL2 with Ubuntu
CUDA:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Tue_Oct_29_23:50:19_PDT_2024
Cuda compilation tools, release 12.6, V12.6.85
Build cuda_12.6.r12.6/compiler.35059454_0
Python:
Python 3.11.13 | packaged by conda-forge | (main, Jun 4 2025, 14:48:23) [GCC 13.3.0] on linux
Code:
model: ColpaliModel = ColpaliModel.from_pretrained("vidore/colpali-v1.3-merged")
Available providers: ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Traceback (most recent call last):
  File "/projects/colpali/embed-anything/test.py", line 19, in <module>
    model: ColpaliModel = ColpaliModel.from_pretrained("vidore/colpali-v1.3-merged")
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ValueError: DriverError(CUDA_ERROR_NOT_FOUND, "named symbol not found")
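For anyone debugging the same crash: the provider list printed above can be checked before loading the model, so a CUDA failure falls back to CPU instead of raising. A minimal sketch of that fallback logic; the helper is hypothetical, not an API of embed_anything or ONNX Runtime:

```python
def pick_provider(available,
                  preferred=("CUDAExecutionProvider", "CPUExecutionProvider")):
    """Return the first preferred ONNX Runtime execution provider
    that appears in the list of available providers."""
    for name in preferred:
        if name in available:
            return name
    raise RuntimeError(f"none of {preferred} is available")

# Provider list from the traceback above:
available = ["TensorrtExecutionProvider", "CUDAExecutionProvider",
             "CPUExecutionProvider"]
print(pick_provider(available))  # CUDAExecutionProvider
```

Note that CUDA appearing in the available-providers list only means the provider is compiled in; loading can still fail (as here) when the installed CUDA/cuDNN runtime doesn't match what onnxruntime-gpu was built against.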