
Conversation

shiyan1121

Context

This PR enhances the Torchtune framework by introducing support for the Intel XPU backend.

Changelog

To avoid redundant hard-coded XPU logic and to simplify the device abstraction, we follow the implementation in PyTorch FSDP:
Making fsdp device-agnostic for custom-backend which implement cuda-semantics #99024

In the torchtune/utils/_device.py file, we introduce the _DeviceHandle class. Users can initialize this class with their device via device_handle = _DeviceHandle.from_device(my_device). Since XPU follows CUDA semantics and provides CUDA-like APIs, e.g. torch.xpu.empty_cache(), most torch.cuda.* calls can be replaced by device_handle.*, which works for both CUDA and XPU, and even for other CUDA-like devices.
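For illustration, here is a minimal sketch of what such a handle can look like, simplified from the FSDP pattern (illustrative, not the exact PR code):

```python
from typing import Any

import torch


class _DeviceHandle:
    """Proxy that forwards CUDA-like API calls to the backend module
    matching the given device (torch.cuda, torch.xpu, ...)."""

    def __init__(self, device: torch.device):
        self._device_type = device.type

    @classmethod
    def from_device(cls, device: torch.device) -> "_DeviceHandle":
        return cls(device)

    def __getattr__(self, name: str) -> Any:
        # Resolves e.g. handle.empty_cache() to torch.xpu.empty_cache()
        return getattr(getattr(torch, self._device_type), name)
```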

For the few places where the device_handle strategy cannot be applied, we fall back to explicit XPU-specific checks to handle the different scenarios.


pytorch-bot bot commented Aug 7, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/torchtune/1280

Note: Links to docs will display an error until the docs builds have been completed.

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot

Hi @HJG971121!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!

@ebsmothers
Contributor

Thanks @HJG971121 for the PR. Overall I think the _DeviceHandle abstraction is pretty nice.

One very basic question is around the usage of intel_extension_for_pytorch and oneccl_bindings_for_pytorch: are these just defining all the necessary bindings to the XPU backend? And if so, do they need to be imported globally, or can we put them in a utility somewhere?

And a second question: does torchtune run on XPU with the changes in this PR? It'd be nice to add some sample runs confirming things run and have reasonable performance.

@shiyan1121
Author

shiyan1121 commented Aug 13, 2024

Hi, @ebsmothers, thank you for your response.

Regarding your first question, both intel_extension_for_pytorch and oneccl_bindings_for_pytorch are essential for our XPU backend. I have incorporated them into utils for global import to ensure they are readily accessible throughout the project.
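For reference, the global import can be guarded so that CUDA-only environments are unaffected (a sketch; the two package names are the real pip packages, but the guard placement is an assumption):

```python
# In a utils module imported at package init time.
try:
    import intel_extension_for_pytorch  # noqa: F401  XPU kernels and torch.xpu bindings
    import oneccl_bindings_for_pytorch  # noqa: F401  registers the "ccl" distributed backend
except ImportError:
    pass  # XPU extras not installed; CUDA/CPU paths are unaffected
```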

For your second question, we have successfully tested the Torchtune functionality with the llama2-7b and llama3-8b models on our XPU. Attached is the log for fine-tuning the llama2-7b model. Could you advise if it would be appropriate to include our recipe in the current PR?
llama2-7b_single_device_lora_bs_4_seqlen_2048_test_data_pack.log

@shiyan1121
Author

Here is an example command line for llama2-7b full finetuning on a single device, with the device specified as XPU:

tune run full_finetune_single_device --config llama2/7B_full_low_memory device=xpu

@facebook-github-bot

Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Aug 21, 2024
@RdoubleA RdoubleA mentioned this pull request Aug 21, 2024
Collaborator

@RdoubleA RdoubleA left a comment


Like the idea of the DeviceHandle a lot. Had a few questions around its usage.

param_init_fn=(
    lambda module: module.to_empty(
-       device=torch.device("cuda"), recurse=False
+       device=torch.device("xpu") if torch.xpu.is_available() else torch.device("cuda"), recurse=False
Collaborator


If device_handle is serving as an intermediary wrapper to handle all device related APIs agnostic of whether that is CPU, CUDA, or XPU, shouldn't we work with device_handle directly here instead of torch.device? My understanding is that we only need to find the appropriate device one time in the recipe (via get_device), then call get_device_handle, and use the device_handle for all these operations. Or am I misunderstanding?
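In code, the flow described above might look like this (a sketch; get_device_handle is the helper introduced in this PR, the rest follows torchtune's existing utils):

```python
import torch
from torchtune import utils

# Resolve the device once, then route backend-specific calls through the handle.
device = utils.get_device(device="xpu")           # existing torchtune helper
device_handle = utils.get_device_handle(device)   # the wrapper added in this PR

device_handle.set_device(device)                  # torch.cuda/torch.xpu.set_device
linear = torch.nn.Linear(8, 8, device="meta")
linear.to_empty(device=device, recurse=False)     # materialize on the chosen device
device_handle.empty_cache()
```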

Author


Thank you for your comment!
Your understanding is correct, and it was my mistake not to use the unified device handle in this instance.

Additionally, I suggest that the code here could be rewritten as
module.to_empty(device=self._device.type, recurse=False)
to avoid redundant device verification, given that the device attribute is already specified.

)

  init_process_group(backend="gloo" if cfg.device == "cpu" else "nccl")
+ if torch.xpu.is_available():
Collaborator


same comment here

Author


Thank you for pointing that out!

Indeed, there are scenarios where the device_handle is not applicable, for example when the backend needs to be determined based on the device type.

That said, I think using cfg.device as the condition variable may be a simpler way. I will rewrite the code, thank you.
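A possible rewrite along those lines (a sketch; "ccl" is the backend name registered by oneccl_bindings_for_pytorch, and the mapping is illustrative):

```python
from torch.distributed import init_process_group

# Pick the distributed backend from the configured device string
# instead of probing torch.xpu at runtime.
backend = {"cpu": "gloo", "xpu": "ccl"}.get(cfg.device, "nccl")
init_process_group(backend=backend)
```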

-     self._device.type == "cuda"
-     and self._log_peak_memory_stats
- ):
+ if self._log_peak_memory_stats:
Collaborator


do we want to guard against CPU here? get_memory_stats will throw an error but maybe we shouldn't log at all if on CPU. (actually, none of our recipes support CPU training, so maybe we should just let it error out, @ebsmothers)

"""
This is a simple abstraction for computing devices,
which enables custom backends that implement CUDA-like
semantics.
Collaborator


Could you provide a few more details here on how this should be used? My understanding is that this is used in place of device. A few examples would be helpful.

Author


This device_handle is designed to eliminate redundant code when implementing CUDA-like APIs for a customized device. For instance, consider XPU. To empty the cache, a simplistic approach relies on conditional statements:

torch.xpu.empty_cache() if device == 'xpu' else torch.cuda.empty_cache()

This approach, however, lacks efficiency and flexibility, particularly as the number of supported devices grows. Therefore, we adopt the device_handle strategy to unify the CUDA APIs and the CUDA-like APIs of the customized device.

To utilize this strategy, first define your device_handle based on your device:

device_handle = get_device_handle(device)  # device could be 'cuda', 'xpu', or your custom device

Then, the code for emptying the cache can be rewritten as:

device_handle.empty_cache()

In this setup, device_handle replaces direct calls to torch.cuda or torch.xpu, streamlining the process across different device types. Commonly used CUDA APIs, such as torch.cuda.memory_allocated() and torch.cuda.synchronize(), can be utilized in the same manner.
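Putting it together, a typical call site looks like this (illustrative):

```python
# Each call dispatches to torch.cuda or torch.xpu depending on the
# device the handle was created from.
device_handle = get_device_handle(device)

device_handle.synchronize()                   # wait for pending kernels
allocated = device_handle.memory_allocated()  # current allocator usage
device_handle.empty_cache()                   # release cached blocks
```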

            return cast(_DeviceHandle, torch.cuda)
        return cls(device)

    def __getattr__(self, __name: str) -> Any:
Collaborator


a quick docstring here would be helpful, something along the lines of "retrieve attribute from correct backend based on registered device"

Author


Thank you for your suggestion! I will add a docstring to it.
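For example, something along these lines (wording assumed, not the final PR text):

```python
def __getattr__(self, __name: str) -> Any:
    """Retrieve the requested attribute from the backend module
    (e.g. torch.cuda or torch.xpu) matching the registered device."""
    return getattr(getattr(torch, self._device_type), __name)
```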

    torch.cuda.set_device(device)
    device_handle = get_device_handle(device)
    if device.index >= device_handle.device_count():
            raise RuntimeError(
Collaborator


nit: extra tab here

@shiyan1121 shiyan1121 marked this pull request as draft August 29, 2024 06:12
@zhuhong61

Converting to draft. We are working on preparing the RFC for the discussion about the Intel PyTorch distributed design. We will push the process forward after reaching alignment :)

@Ankur-singh
Contributor

> Converting to draft. We are working on preparing the RFC for the discussion about the Intel PyTorch distributed design. We will push the process forward after reaching alignment :)

Hi @zhuhong61, any updates on this?

@songhappy
Contributor

@shiyan1121 Could you close this one, please? New PRs have been added with the updated code to support XPU; see #1953, #1826, #2249.
