Question About Fine-Tuning LLAMA.cpp #3271
Answered by ianscrivener, Sep 19, 2023
Answer selected by Green-Sky
Yes, the llama LLM model is definitely fine-tuneable, via (1) prompt tuning, (2) LoRA, and (3) custom fine-tuned llama models.
Yes, the llama.cpp inference engine is adaptable insofar as it is a C++ library/API. While many people use the example apps such as ./main and ./server as-is, the overall intention is that llama.cpp is a C++ library that C++ developers can tweak and adapt to their required use case. In fact, at least one of the core llama.cpp devs has expressed that they are "available for hire".
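To illustrate the "library, not just example apps" point: embedding llama.cpp in your own C++ program boils down to linking against it and driving the model/context lifecycle from llama.h yourself. A minimal sketch is below; API names follow the post-GGUF era (roughly late 2023), and signatures have changed across llama.cpp versions, so treat this as illustrative rather than copy-paste ready. The model path is a placeholder.

```cpp
// Minimal sketch of embedding llama.cpp as a library instead of
// shelling out to ./main. Compile and link against llama.cpp itself.
#include "llama.h"
#include <cstdio>

int main() {
    llama_backend_init(false);  // initialize the ggml backend (NUMA off)

    // Load a GGUF model file from disk ("model.gguf" is a placeholder path).
    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);
    if (model == nullptr) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // Create an inference context; this is where you tune per-session
    // settings such as the context window size.
    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048;
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // ... tokenize a prompt, evaluate it, and sample tokens here ...

    // Tear down in reverse order of creation.
    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```

This lifecycle (backend init, model load, context create, evaluate, free) is the same skeleton the bundled ./main example is built on; adapting llama.cpp to a custom use case usually means replacing the middle section with your own prompt handling and sampling logic.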