Conversation

@briangallagher
Contributor

What this PR does / why we need it:
Update the examples to reflect the change to how parameters are passed to the training function.
The dictionary is now unpacked when calling the training function. See PR
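The calling-convention change described above can be sketched as follows; the function and argument names here are illustrative examples, not taken verbatim from the SDK:

```python
# Hypothetical sketch of the change: the training function now declares
# individual parameters instead of receiving the whole args dictionary.
def train_distilbert(bucket: str, model_name: str):
    """Toy training function that receives unpacked keyword arguments."""
    return f"training {model_name}, artifacts -> {bucket}"

# The user still supplies parameters as a dictionary.
func_args = {"bucket": "s3://my-bucket", "model_name": "distilbert-base"}

# Before: the runner passed the dict as one positional argument, i.e.
# train_distilbert(func_args), and the function unpacked it itself.
# After: the runner unpacks the dict into keyword arguments:
result = train_distilbert(**func_args)
```

This is why the examples in this PR change signatures like `def train_distilbert(args):` to named, typed parameters.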

Which issue(s) this PR fixes:
Fixes #2801

@review-notebook-app

Check out this pull request on ReviewNB to see visual diffs and provide feedback on Jupyter notebooks.

@coveralls

coveralls commented Sep 4, 2025

Pull Request Test Coverage Report for Build 17529688495

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 52.136%

Totals Coverage Status
Change from base Build 17469034215: 0.0%
Covered Lines: 1025
Relevant Lines: 1966

💛 - Coveralls

Member

@andreyvelich left a comment


Thanks @briangallagher!

"outputs": [],
"source": [
"def deepspeed_train_t5(args):\n",
"def deepspeed_train_t5(NUM_SAMPLES, MODEL_NAME, BUCKET):\n",
Member


Can you add the types for them, please?

Suggested change
"def deepspeed_train_t5(NUM_SAMPLES, MODEL_NAME, BUCKET):\n",
"def deepspeed_train_t5(num_samples: str, model_name: str, bucket: str):\n",

"outputs": [],
"source": [
"def mlx_train_mnist(args):\n",
"def mlx_train_mnist(BUCKET, MODEL):\n",
Member


Suggested change
"def mlx_train_mnist(BUCKET, MODEL):\n",
"def mlx_train_mnist(bucket: str, model: str):\n",

"outputs": [],
"source": [
"def fine_tune_llama(func_args):\n",
"def fine_tune_llama(HF_TOKEN, NUM_SAMPLES, BATCH_SIZE):\n",
Member


Suggested change
"def fine_tune_llama(HF_TOKEN, NUM_SAMPLES, BATCH_SIZE):\n",
"def fine_tune_llama(hf_token: str, num_samples: str, batch_size: str):\n",

"outputs": [],
"source": [
"def train_distilbert(args):\n",
"def train_distilbert(BUCKET, MODEL_NAME):\n",
Member


Suggested change
"def train_distilbert(BUCKET, MODEL_NAME):\n",
"def train_distilbert(bucket: str = None, model_name: str):\n",

Contributor Author


@andreyvelich Changes pushed

@briangallagher briangallagher force-pushed the update-examples-with-unpacking-params branch 2 times, most recently from b046202 to e92dbc3 Compare September 6, 2025 15:53
@briangallagher changed the title from "update examples to reflect func_args now being unpacked" to "fix: update examples to reflect func_args now being unpacked" Sep 6, 2025
Member

@andreyvelich left a comment


@briangallagher Can you rebase your PR to test the GPU example as well?

Signed-off-by: Brian Gallagher <briangal@gmail.com>
@briangallagher briangallagher force-pushed the update-examples-with-unpacking-params branch from e92dbc3 to 36b163a Compare September 7, 2025 13:15
Signed-off-by: Brian Gallagher <briangal@gmail.com>
@briangallagher briangallagher force-pushed the update-examples-with-unpacking-params branch from 36b163a to dd8f34d Compare September 7, 2025 14:15
Member

@andreyvelich left a comment


Thanks @briangallagher!
/lgtm
/approve

@google-oss-prow

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: andreyvelich

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@google-oss-prow google-oss-prow bot merged commit 0f485ed into kubeflow:master Sep 7, 2025
25 of 27 checks passed
@google-oss-prow google-oss-prow bot added this to the v2.1 milestone Sep 7, 2025
astefanutti pushed a commit to astefanutti/training-operator that referenced this pull request Sep 24, 2025
…w#2815)

* update examples to reflect func_args now being unpacked

Signed-off-by: Brian Gallagher <briangal@gmail.com>

* fix: lower case func arguments and add types

Signed-off-by: Brian Gallagher <briangal@gmail.com>

---------

Signed-off-by: Brian Gallagher <briangal@gmail.com>
google-oss-prow bot pushed a commit that referenced this pull request Sep 24, 2025
…acked (#2815) (#2853)

* update examples to reflect func_args now being unpacked

* fix: lower case func arguments and add types

---------

Signed-off-by: Brian Gallagher <briangal@gmail.com>
Co-authored-by: Brian Gallagher <briangal@gmail.com>
Successfully merging this pull request may close these issues.

Update Trainer Examples to use param unpacking in the training function call

3 participants