Using Metal with an Intel chip on macOS #38
Closed

authcompanion started this conversation in General.
Replies: 1 comment · 3 replies
-
Hello 👋 ,
Having some issues getting going for the first time and could use some assistance; please see below.
Reproduction:
1. npm install --save node-llama-cpp
2. Created inference.js (listing below) and updated the model path to my gguf model
3. node inference.js
Error:
llama_new_context_with_model: ggml_metal_init() failed

npx node-llama-cpp chat -m models/wizardlm-1.0-uncensored-llama2-13b.Q5_K_M.gguf
fails with the same error.
Environment:
Model Name: MacBook Pro
ProductName: macOS
ProductVersion: 13.3.1
node: v18.16.0
node-llama-cpp: 2.4.0
inference.js:
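A minimal sketch, assuming inference.js follows the node-llama-cpp 2.x getting-started example (LlamaModel / LlamaContext / LlamaChatSession) with only the model path changed:

```js
import path from "path";
import {fileURLToPath} from "url";
import {LlamaModel, LlamaContext, LlamaChatSession} from "node-llama-cpp";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

const model = new LlamaModel({
    modelPath: path.join(__dirname, "models", "wizardlm-1.0-uncensored-llama2-13b.Q5_K_M.gguf")
});
// Creating the context calls llama_new_context_with_model, which is
// where ggml_metal_init() fails on this machine.
const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

const answer = await session.prompt("Hi there, how are you?");
console.log(answer);
```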
-
@authcompanion Do you have an Apple silicon chip or an Intel chip?
3 replies
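A sketch of one possible workaround, assuming node-llama-cpp 2.x's LlamaModel accepts a gpuLayers option (0 would keep every layer on the CPU, so ggml_metal_init() should not be reached); the option name and behavior are assumptions, not a confirmed fix from this thread:

```js
import {LlamaModel, LlamaContext, LlamaChatSession} from "node-llama-cpp";

const model = new LlamaModel({
    modelPath: "models/wizardlm-1.0-uncensored-llama2-13b.Q5_K_M.gguf",
    // Assumed option: offload zero layers to the GPU so Metal is not used.
    gpuLayers: 0
});
const context = new LlamaContext({model});
const session = new LlamaChatSession({context});

console.log(await session.prompt("Hello!"));
```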