JORA: JAX library for distributed LLM fine-tuning #20405
aniquetahir asked this question in Show and tell
We created this library for distributed fine-tuning of Llama-based LLMs (support for more model families is planned). Compared to HuggingFace, it significantly reduces the lines of code needed for simple fine-tuning, and we observe a 12x performance improvement over HuggingFace/DeepSpeed, along with lower per-GPU memory consumption. A short illustrative sketch of the JAX sharding primitives that this kind of distributed setup builds on is included after the link. Check it out here:
https://github.com/aniquetahir/JORA
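As a rough illustration of how multi-GPU parameter sharding works in JAX, here is a minimal sketch using JAX's standard sharding primitives (`Mesh`, `NamedSharding`, `PartitionSpec`). This is not JORA's API; the function and variable names are illustrative assumptions, and the repository linked above documents the actual interface.

```python
# Minimal sketch of JAX tensor sharding across devices (NOT JORA's API).
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Lay out all available devices along a single "model" axis.
devices = mesh_utils.create_device_mesh((jax.device_count(),))
mesh = Mesh(devices, axis_names=("model",))

# Shard a (hypothetical) weight matrix column-wise, so each device
# holds only a slice of the parameters.
weights = jnp.zeros((4096, 4096))
sharded_weights = jax.device_put(weights, NamedSharding(mesh, P(None, "model")))

# jit-compiled computation runs across the mesh; JAX inserts the
# necessary communication based on the input shardings.
@jax.jit
def forward(w, x):
    return x @ w

x = jnp.ones((8, 4096))
y = forward(sharded_weights, x)
print(y.shape)  # (8, 4096)
```

The same idea extends to sharding LoRA adapters and optimizer state across GPUs, which is what keeps per-device memory low during fine-tuning.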