How can I run llama.cpp on a specific GPU (I have a few GPUs in my PC), for both the main and server executables?

Replies: 1 comment
There are two ways to do this: restrict which devices the process can see with the CUDA_VISIBLE_DEVICES environment variable, or keep every GPU visible and direct llama.cpp to one of them with the --main-gpu / --tensor-split options. In my experience, setting CUDA_VISIBLE_DEVICES is the simpler option, and it works the same way for both the main and server executables.
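A minimal sketch of both approaches, assuming a CUDA build of llama.cpp; the model path `models/model.gguf` and device index 1 are placeholders, not values from the thread:

```sh
# Way 1: hide every GPU except physical device 1 from the process.
# CUDA renumbers the visible devices, so the chosen GPU becomes
# device 0 inside llama.cpp.
CUDA_VISIBLE_DEVICES=1 ./main -m models/model.gguf -ngl 99 -p "Hello"
CUDA_VISIBLE_DEVICES=1 ./server -m models/model.gguf -ngl 99

# Way 2: keep all GPUs visible, but place the whole model on GPU 1.
# --tensor-split 0,1 assigns none of the weights to GPU 0 and all of
# them to GPU 1; --main-gpu 1 keeps scratch buffers and small tensors
# on the same device.
./main -m models/model.gguf -ngl 99 --tensor-split 0,1 --main-gpu 1 -p "Hello"
./server -m models/model.gguf -ngl 99 --tensor-split 0,1 --main-gpu 1
```

One practical difference: with CUDA_VISIBLE_DEVICES the process cannot see the other GPUs at all, while the --tensor-split route still initializes every visible device even though the weights land on only one of them.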