Hello, is there any procedure available for checking whether ggml will work on the GPU, or for detecting a GPU device? Thanks in advance.
Answered by Emreerdog, Dec 2, 2024
Thankfully I found it. It seems that when loading the model with `llama_load_model_from_file`, it iterates through the devices, and this way I can see all of them.
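
As a rough sketch (not taken from the original answer), the registered devices can also be enumerated directly with the ggml backend registry API; the function and enum names below (`ggml_backend_dev_count`, `ggml_backend_dev_get`, `ggml_backend_dev_name`, `ggml_backend_dev_type`, `GGML_BACKEND_DEVICE_TYPE_GPU`) are assumed from recent ggml headers, so check `ggml-backend.h` in your build:

```cpp
// Sketch: list the devices ggml has registered and report whether a GPU is among them.
// Assumes the ggml backend registry API available in recent ggml/llama.cpp versions.
#include <cstdio>
#include "ggml-backend.h"

int main() {
    size_t n_devices = ggml_backend_dev_count();
    printf("found %zu ggml device(s)\n", n_devices);

    bool has_gpu = false;
    for (size_t i = 0; i < n_devices; ++i) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("  device %zu: %s (%s)\n",
               i,
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev));
        if (ggml_backend_dev_type(dev) == GGML_BACKEND_DEVICE_TYPE_GPU) {
            has_gpu = true;
        }
    }

    printf("GPU device available: %s\n", has_gpu ? "yes" : "no");
    return 0;
}
```

This checks for a GPU before loading a model; loading via `llama_load_model_from_file` will additionally print the devices it finds in its log output.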