Replies: 2 comments
-
When you compile (or download) llama.cpp with GPU support, you build it for a particular GPU family, so unless your iGPU and dGPU are the same kind of hardware, the binary will already only see one of them, depending on which target it was compiled for. That said, iGPUs typically aren't capable of compute the same way dGPUs are; it really depends on the particular hardware model.
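If both GPUs do end up visible to the same backend (e.g. a single ROCm/HIP build that covers both), you can usually pick one at runtime. A hedged sketch, assuming a ROCm build of llama.cpp; the environment variable is standard ROCm, and the `-mg` flag exists in llama.cpp, but double-check the flag names against the build you're running (`model.gguf` is a placeholder):

```shell
# Restrict the HIP runtime to a single device before launching llama.cpp.
# Device indices come from `rocminfo` / `rocm-smi`; which index is the dGPU
# vs. the iGPU depends on enumeration order on your machine.
HIP_VISIBLE_DEVICES=0 ./llama-cli -m model.gguf -ngl 99 -p "Hello"

# Alternatively, llama.cpp itself has a main-GPU selector for when several
# devices are visible to the backend:
./llama-cli -m model.gguf -ngl 99 -mg 1 -p "Hello"
```

The environment-variable approach is the safer of the two, since it hides the other device from the runtime entirely rather than relying on the application to route work correctly.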
-
Hmm, so the iGPU is a Radeon 680M and the dGPU is an RX 6600M; they are both from AMD.
-
Sorry if this sort of question isn't allowed here!
I'm getting a decent gaming mini-PC with both a dGPU and an iGPU. The iGPU might be slow, but it offers access to much more RAM, which is useful for things like quantization. When two graphics cards are available, can you select which one is used, for example by the quantize tool?