
Conversation


@aishwaryaraimule21 commented Feb 5, 2025

What this PR does / why we need it:

This PR adds a distributed training example in which a Llama model is fine-tuned on the Yelp dataset using a Kubeflow Pipeline.


Member

@andreyvelich left a comment


Thank you for this effort @aishwaryaraimule21!
I am fine with merging this KFP example.
Any thoughts @johnugeorge @tenzen-y @Electronic-Waste @astefanutti ?

" )\n",
" \n",
" # check the status of the job\n",
" from kubeflow.pytorchjob import PyTorchJobClient\n",
Member

@andreyvelich Feb 15, 2025


Should you use TrainingClient here?

Author


Updated the PR. Now using TrainingClient().get_job_conditions() to fetch the job status.
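
For reference, a minimal sketch of that status check using the kubeflow-training SDK; the job name matches the one used elsewhere in this notebook, and the namespace is illustrative:

```python
from kubeflow.training import TrainingClient

client = TrainingClient()
# Fetch the PyTorchJob conditions (e.g. Created, Running, Succeeded, Failed).
conditions = client.get_job_conditions(
    name="llama-3-1-8b-kubecon",
    namespace="kubeflow-user-example-com",  # illustrative namespace
)
print(conditions)
```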

@Electronic-Waste
Member

I have no objections:)

@google-oss-prow

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign terrytangyuan for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@tenzen-y
Member

> Thank you for this effort @aishwaryaraimule21! I am fine with merging this KFP example. Any thoughts @johnugeorge @tenzen-y @Electronic-Waste @astefanutti ?

In that case, what is the relationship to the training examples in the KFP repository, such as https://github.com/kubeflow/pipelines/tree/472f8779ded18f8904c5cbe15c0573d461d57af5/components/kubeflow/pytorch-launcher?

@andreyvelich
Member

> Thank you for this effort @aishwaryaraimule21! I am fine with merging this KFP example. Any thoughts @johnugeorge @tenzen-y @Electronic-Waste @astefanutti ?

> In that case, what is the relationship to the training examples in the KFP repository, such as https://github.com/kubeflow/pipelines/tree/472f8779ded18f8904c5cbe15c0573d461d57af5/components/kubeflow/pytorch-launcher?

I think you can use the PyTorch launcher, or you can directly use the kubeflow-training SDK in a lightweight KFP component.
It is up to the user to decide.
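
For illustration, a minimal sketch of the second approach, a lightweight KFP component that calls the kubeflow-training SDK directly (the component name is illustrative):

```python
from kfp import dsl

@dsl.component(packages_to_install=["kubeflow-training[huggingface]"])
def finetune_model():
    # Imports live inside the component so they resolve in its container.
    from kubeflow.training import TrainingClient

    client = TrainingClient()
    # client.train(...) submits a PyTorchJob from within the pipeline step;
    # model/dataset/trainer parameters are elided here, see the notebook.
```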

@tenzen-y
Member

tenzen-y commented Feb 15, 2025

> Thank you for this effort @aishwaryaraimule21! I am fine with merging this KFP example. Any thoughts @johnugeorge @tenzen-y @Electronic-Waste @astefanutti ?

> In that case, what is the relationship to the training examples in the KFP repository, such as https://github.com/kubeflow/pipelines/tree/472f8779ded18f8904c5cbe15c0573d461d57af5/components/kubeflow/pytorch-launcher?

> I think you can use the PyTorch launcher, or you can directly use the kubeflow-training SDK in a lightweight KFP component. It is up to the user to decide.

SGTM.
It would be great if we could provide comprehensive examples after we release the consolidated SDK (I know the first version of the SDK will contain only Katib and Trainer features).

@andreyvelich
Member

@aishwaryaraimule21 Can you sign the DCO, please?

@aishwaryaraimule21
Author

@andreyvelich I have signed the DCO. Please check. Thanks.

"\n",
"In this component, use TrainingClient() to create PyTorchJob which will fine-tune Llama3 model on 1 worker with 1 GPU.\n",
"\n",
"Specify the required packages in the *dsl.component* decorator. We would need kubeflow-pytorchjob, kubeflow-training[huggingface] and numpy packages in this Kubeflow component.\n",
Contributor


Is kubeflow-pytorchjob really necessary since TrainingClient is used now?

"metadata": {},
"outputs": [],
"source": [
"@dsl.component(packages_to_install=['kubeflow-pytorchjob', 'kubeflow-training[huggingface]','numpy<1.24'])\n",
Contributor


Ditto, is kubeflow-pytorchjob really necessary since TrainingClient is used now?
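
If so, the decorator could presumably be trimmed to:

```python
@dsl.component(packages_to_install=['kubeflow-training[huggingface]', 'numpy<1.24'])
```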

" ),\n",
" # it is assumed for text related tasks, you have 'text' column in the dataset.\n",
" # for more info on how dataset is loaded check load_and_preprocess_data function in sdk/python/kubeflow/trainer/hf_llm_training.py\n",
" dataset_provider_parameters=HuggingFaceDatasetParams(repo_id=\"aishwaryayyy/events_data\"),\n",
Contributor


It would be better to remove the dependency on a user-specific repository.

Author


Replaced the user-specific repository with https://huggingface.co/datasets/Yelp/yelp_review_full.
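
The updated parameter would then presumably read:

```python
# Public Yelp reviews dataset on the Hugging Face Hub.
dataset_provider_parameters=HuggingFaceDatasetParams(repo_id="Yelp/yelp_review_full"),
```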

" name=\"llama-3-1-8b-kubecon\",\n",
" num_workers=1,\n",
" num_procs_per_worker=1,\n",
" # specify the storage class if you don't want to use the default one for the storage-initializer PVC\n",
Contributor


It would be useful to mention that a provisioner capable of provisioning RWX PVCs is needed when distributing the training across multiple nodes / workers.

Author


Done.
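
For example, the storage configuration can carry the note along these lines (the size value is illustrative):

```python
# With num_workers > 1 every worker mounts the storage-initializer PVC,
# so the storage class must support ReadWriteMany (RWX), e.g. an NFS provisioner.
storage_config={
    "size": "20Gi",                  # illustrative
    "storage_class": "nfs-storage",
},
```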

" \"storage_class\": \"nfs-storage\",\n",
" },\n",
" model_provider_parameters=HuggingFaceModelParams(\n",
" model_uri=\"hf://meta-llama/Llama-3.1-8B-Instruct\",\n",
Contributor


Should we cover the distributed training case, and provide the configuration so the model does not get downloaded on each local node / worker?

Author


Done, covered the distributed training case using 2 workers.
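
A condensed sketch of the resulting call, assuming the RWX-backed storage class from above so the model is downloaded once to shared storage rather than per worker:

```python
from transformers import AutoModelForCausalLM
from kubeflow.training import TrainingClient
from kubeflow.storage_initializer.hugging_face import HuggingFaceModelParams

TrainingClient().train(
    name="llama-3-1-8b-kubecon",
    num_workers=2,               # distribute training across two workers
    num_procs_per_worker=1,
    storage_config={
        "size": "20Gi",                  # illustrative
        "storage_class": "nfs-storage",  # RWX-capable, shared by all workers
    },
    model_provider_parameters=HuggingFaceModelParams(
        model_uri="hf://meta-llama/Llama-3.1-8B-Instruct",
        transformer_type=AutoModelForCausalLM,
    ),
    # dataset_provider_parameters and trainer_parameters as in the notebook
)
```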

@coveralls

Pull Request Test Coverage Report for Build 13375853453

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 100.0%

Totals:
  • Change from base Build 13314191840: 0.0%
  • Covered Lines: 85
  • Relevant Lines: 85

💛 - Coveralls

@andreyvelich
Member

Hi @aishwaryaraimule21, did you get a chance to address @astefanutti's feedback, so we can merge this example to the release-1.9 branch?

@aishwaryaraimule21
Author

aishwaryaraimule21 commented Jul 19, 2025

@andreyvelich I have tested the distributed training workflow using an older trainer image from the release-1.9 branch.
With the latest trainer package, I am running into an OOM error for the same TrainingArgs and hardware setup.
I see that the transformers package in the release-1.9 branch was upgraded from 4.38.0 to 4.50.2 during this time.
f58e893#diff-3bbef68e7a1f42b8d4d1ef6f0[…]9a1adeb7a3ec7e2a0f4d153c79276R3

@andreyvelich
Member

> @andreyvelich I have tested the distributed training workflow using an older trainer image from the release-1.9 branch. With the latest trainer package, I am running into an OOM error for the same TrainingArgs and hardware setup. I see that the transformers package in the release-1.9 branch was upgraded from 4.38.0 to 4.50.2 during this time. f58e893#diff-3bbef68e7a1f42b8d4d1ef6f0[…]9a1adeb7a3ec7e2a0f4d153c79276R3

Do you want to update the other packages and try again, @aishwaryaraimule21?

@aishwaryaraimule21
Author

aishwaryaraimule21 commented Aug 17, 2025

> Do you want to update the other packages and try again, @aishwaryaraimule21?

Yes, @andreyvelich. Let me try updating the other packages. I tried running this example with smaller models like SmolLM2-135M-Instruct; the training succeeds, but it fails at the subsequent steps. I haven't had a chance to debug this yet. I will try to fix it this week. Thanks!
