How does ggml load the llama weights into the memory? #7871
Unanswered · mr-mapache asked this question in Q&A
Hi there! I want to contribute to the project and I'm trying to understand the code. While I have a grasp on how the tensors and the computational graph work, I don't understand how the pretrained llama weights get loaded into the ggml tensors, and I can't find any source explaining this.
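The closest thing I've found so far is the public gguf API in ggml. Below is a minimal sketch of how I *think* a GGUF file's tensors can be pulled into a ggml context using that API — I'm assuming the model is in GGUF format and I'm not sure this is the exact path llama.cpp itself takes internally (it seems to have its own, more elaborate model loader):

```c
#include <stdio.h>

#include "ggml.h"
#include "gguf.h"   // gguf_* declarations (were in ggml.h in older versions)

int main(void) {
    // the gguf loader can create a ggml context for us and read the
    // tensor data from the file into it
    struct ggml_context * ctx_data = NULL;

    struct gguf_init_params params = {
        /*.no_alloc =*/ false,      // actually allocate and read tensor data
        /*.ctx      =*/ &ctx_data,  // receives the ggml context with the tensors
    };

    struct gguf_context * ctx_gguf = gguf_init_from_file("model.gguf", params);
    if (!ctx_gguf) {
        fprintf(stderr, "failed to load model.gguf\n");
        return 1;
    }

    // each tensor in the file is named; look it up in the ggml context by name
    const int64_t n_tensors = gguf_get_n_tensors(ctx_gguf);
    for (int64_t i = 0; i < n_tensors; ++i) {
        const char * name = gguf_get_tensor_name(ctx_gguf, i);
        struct ggml_tensor * t = ggml_get_tensor(ctx_data, name);
        // t->data now points at the weights that were read from the file
        printf("%s: %lld x %lld\n", name, (long long) t->ne[0], (long long) t->ne[1]);
    }

    gguf_free(ctx_gguf);
    ggml_free(ctx_data);
    return 0;
}
```

Is this roughly what happens under the hood, or does llama.cpp map the file into memory and point the tensors at it directly?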