fix: update examples to reflect func_args now being unpacked #2815
Conversation
Thanks @briangallagher!
| "outputs": [], | ||
| "source": [ | ||
| "def deepspeed_train_t5(args):\n", | ||
| "def deepspeed_train_t5(NUM_SAMPLES, MODEL_NAME, BUCKET):\n", |
Can you add the types for them, please?
| "def deepspeed_train_t5(NUM_SAMPLES, MODEL_NAME, BUCKET):\n", | |
| "def deepspeed_train_t5(num_samples: str, model_name: str, bucket: str):\n", |
| "outputs": [], | ||
| "source": [ | ||
| "def mlx_train_mnist(args):\n", | ||
| "def mlx_train_mnist(BUCKET, MODEL):\n", |
| "def mlx_train_mnist(BUCKET, MODEL):\n", | |
| "def mlx_train_mnist(bucket: str, model: str):\n", |
| "outputs": [], | ||
| "source": [ | ||
| "def fine_tune_llama(func_args):\n", | ||
| "def fine_tune_llama(HF_TOKEN, NUM_SAMPLES, BATCH_SIZE):\n", |
| "def fine_tune_llama(HF_TOKEN, NUM_SAMPLES, BATCH_SIZE):\n", | |
| "def fine_tune_llama(hf_token: str, num_samples: str, batch_size: str):\n", |
| "outputs": [], | ||
| "source": [ | ||
| "def train_distilbert(args):\n", | ||
| "def train_distilbert(BUCKET, MODEL_NAME):\n", |
| "def train_distilbert(BUCKET, MODEL_NAME):\n", | |
| "def train_distilbert(bucket: str = None, model_name: str):\n", |
@andreyvelich Changes pushed
Force-pushed from b046202 to e92dbc3
@briangallagher Can you rebase your PR to test the GPU example as well?
Signed-off-by: Brian Gallagher <briangal@gmail.com>
Force-pushed from e92dbc3 to 36b163a
Signed-off-by: Brian Gallagher <briangal@gmail.com>
Force-pushed from 36b163a to dd8f34d
Thanks @briangallagher!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andreyvelich

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Approvers can indicate their approval by writing /approve in a comment.
…w#2815)

* update examples to reflect func_args now being unpacked

Signed-off-by: Brian Gallagher <briangal@gmail.com>

* fix: lower case func arguments and add types

Signed-off-by: Brian Gallagher <briangal@gmail.com>

---------

Signed-off-by: Brian Gallagher <briangal@gmail.com>
What this PR does / why we need it:
Update the examples to reflect the change in how parameters are passed to the training function: the func_args dictionary is now unpacked when the training function is called. See the related PR.
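For illustration, here is a minimal sketch of what the convention change means for the notebook examples. The func_args values and the direct call at the bottom are hypothetical stand-ins; the real call is made by the training client, not user code.

```python
# Hypothetical func_args, using parameter names from the notebook examples.
func_args = {"num_samples": "100", "model_name": "t5-small", "bucket": "my-bucket"}

# Before: the training function received the whole dictionary as one argument.
def deepspeed_train_t5_old(args):
    num_samples = args["num_samples"]  # values had to be looked up by key
    ...

# After: the dictionary is unpacked, so each key must be a named parameter.
def deepspeed_train_t5(num_samples: str, model_name: str, bucket: str):
    ...

# The call site effectively changed from func(func_args) to func(**func_args):
deepspeed_train_t5(**func_args)
```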
Which issue(s) this PR fixes (optional, in Fixes #<issue number>, #<issue number>, ... format, will close the issue(s) when PR gets merged):

Fixes #2801