Fine-Tuning LLM with MacBook Pro M4 MAX #1560
softshipper started this conversation in General
Replies: 1 comment · 8 replies
-
It really depends on what you are trying to do. You can LoRA fine-tune that model on an M1 with just 32 GB on a small-to-medium dataset. If your dataset is larger, or has very long sequences, or you want to do full fine-tuning (training all the parameters, not just low-rank adapters), you will probably want more RAM, so 128 GB will be better.
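To see why LoRA fits in far less RAM than full fine-tuning, here is a rough back-of-envelope memory estimate. All numbers are assumptions for illustration (bf16 weights, Adam optimizer, roughly 1% of parameters trainable under LoRA), and activation memory, which grows with batch size and sequence length, is deliberately ignored:

```python
# Rough memory estimate for fine-tuning an N-billion-parameter model.
# Illustrative assumptions only; real usage varies with framework,
# precision, and especially activations (batch size, sequence length).

def estimate_fine_tune_gb(params_billion, full=False,
                          weight_bytes=2,       # bf16 base weights
                          lora_fraction=0.01):  # ~1% trainable with LoRA
    weights_gb = params_billion * weight_bytes
    trainable = params_billion if full else params_billion * lora_fraction
    # gradients (same dtype as weights) + Adam's two fp32 moment buffers
    train_state_gb = trainable * (weight_bytes + 4 + 4)
    return weights_gb + train_state_gb

lora_gb = estimate_fine_tune_gb(3)             # LoRA on a 3B model
full_gb = estimate_fine_tune_gb(3, full=True)  # full fine-tune, 3B model
print(f"LoRA: ~{lora_gb:.1f} GB, full: ~{full_gb:.1f} GB (before activations)")
```

Under these assumptions a 3B model needs only a few hundred MB of training state on top of the weights with LoRA, versus tens of GB for a full fine-tune, which is why long sequences or full fine-tuning push you toward the 128 GB configuration.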
-
Hi everyone,
I have a question regarding the capability of the MacBook Pro M4 Max with 128 GB of RAM for fine-tuning large language models. Specifically, is this system sufficient to fine-tune LLaMA 3.2 with 3 billion parameters using MLX?
Best regards
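For reference, the mlx-lm package ships a LoRA training entry point, so a run on that machine might look like the following sketch. The model name, data path, and flag values here are illustrative assumptions, not taken from this thread; check `python -m mlx_lm.lora --help` for the exact flags in your installed version:

```shell
# Illustrative LoRA fine-tune with mlx-lm on Apple silicon.
# Model name, data path, and hyperparameters are placeholders.
python -m mlx_lm.lora \
    --model mlx-community/Llama-3.2-3B-Instruct-4bit \
    --train \
    --data ./data \
    --batch-size 4 \
    --iters 1000
```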