-
I tried this one, and got:

```shell
$ ./main -m ./models/gpt4all-7B/ggml-model-q4_0.bin -t 8 -n 512 -p 'send http request in golang'
main: build = 917 (1a94186)
main: seed = 1690515134
llama.cpp: loading model from ./models/gpt4all-7B/ggml-model-q4_0.bin
error loading model: unexpectedly reached end of file
llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model './models/gpt4all-7B/ggml-model-q4_0.bin'
main: error: unable to load model
```
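In my experience (an assumption, not something the log alone proves), "unexpectedly reached end of file" usually means the model file is truncated (incomplete download) or in an older ggml format than this build expects. A minimal first check, reusing the model path from the command above:

```shell
# Rule out a bad download before debugging the loader itself.
# The path below is the one from the failing command; substitute your own.
MODEL=./models/gpt4all-7B/ggml-model-q4_0.bin

if [ -f "$MODEL" ]; then
  ls -l "$MODEL"       # byte count should match the published file size
  sha256sum "$MODEL"   # digest should match the published checksum
else
  echo "model file not found: $MODEL"
fi
```

If the size or hash does not match what the model's download page lists, re-download; if they match, the file is likely in a format this llama.cpp build no longer reads.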
-
What is the "added_tokens.json" file required to use GPT4All? The instructions say to use "one from Alpaca", but there are many different projects based on Alpaca with that in the name, and many different variants of "added_tokens.json"; most of the ones I find are essentially blank. I have no idea which file I actually need. Can someone provide the exact file, or at least the SHA-256 hash of the necessary one?
For reference, the SHA-256 of my copy of GPT4All's model is "05c9dc0a4904f3b232cffe717091b0b0a8246f49c3f253208fbf342ed79a6122 *gpt4all-lora-quantized.bin".
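The hash line above is already in `sha256sum`'s check format (the `*` marks binary mode), so anyone with the file can verify their copy directly. A minimal sketch, assuming the file sits in the current directory under the same name:

```shell
# Feed the quoted hash line back to sha256sum in check mode.
# Assumes gpt4all-lora-quantized.bin is in the current directory.
echo "05c9dc0a4904f3b232cffe717091b0b0a8246f49c3f253208fbf342ed79a6122 *gpt4all-lora-quantized.bin" \
  | sha256sum -c - || echo "checksum mismatch or file missing"
```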