Hello everyone,
I'm experimenting with fine-tuning LLMs using Ludwig. It's going great, but I've hit a blocking point.
Ludwig's fine-tuning uses LoRA and outputs an adapter.
I'd like to produce a GGUF file for serving, which is how I stumbled upon llama.cpp, but I don't really know which converter to use; every time I run one I get a FileNotFoundError such as:
FileNotFoundError: Can't find model in directory ../<my_project>/results/experiment_run
I'm not sure I'm doing everything right; I tried merging the adapter into the base model, but that didn't work any better.
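For context, my merge attempt looked roughly like this (a sketch only, assuming the adapter is a standard PEFT-style LoRA checkpoint; the base model name and paths are placeholders, not my exact setup):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholders -- not my actual model or paths
base_model_name = "meta-llama/Llama-2-7b-hf"
adapter_dir = "results/experiment_run/model/model_weights"

# Load the base model and attach the LoRA adapter produced by Ludwig
base = AutoModelForCausalLM.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(base, adapter_dir)

# Fold the LoRA weights into the base model and save a plain HF checkpoint,
# which is (I assume) what llama.cpp's HF-to-GGUF converter expects to find
merged = model.merge_and_unload()
merged.save_pretrained("merged_model")
AutoTokenizer.from_pretrained(base_model_name).save_pretrained("merged_model")

# Next step (outside Python): point llama.cpp's convert script at ./merged_model
```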
I'm not even sure whether llama.cpp is the right tool for this use case.
Thanks in advance.