Use PyTorch's torch.accelerator instead of torch.cuda to provide support for non-NVIDIA GPUs #72

@Lolle2000la

Description

Currently, the device_list.py file resolves devices as follows:

import torch

gpu_list = []
for i in range(torch.cuda.device_count()):
    device_name = torch.cuda.get_device_properties(i).name
    gpu_list.append(device_name)

print(gpu_list)

As far as I can see, the code doesn't rely on CUDA-only features, so switching to the accelerator API would mean other accelerators are picked up automatically, assuming the right PyTorch build is installed.

Currently, ROCm devices are also enumerated through the CUDA API, but Intel GPUs are not. For Intel GPUs it might(?) be necessary to use the XPU variants for things like memory stats, if I read the code correctly, though the API surface looks the same. Maybe the appropriate backend could be selected once during initialization based on which accelerator is in use?
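That backend selection could be sketched like this, under the assumption that torch.cuda and torch.xpu both expose the mirrored memory_stats() API (the helper name device_memory_stats is hypothetical, not something in the repo):

```python
import torch

def device_memory_stats(index: int) -> dict:
    """Return per-device memory stats via whichever backend is active."""
    # Hypothetical helper: torch.get_device_module resolves the backend
    # module (torch.cuda, torch.xpu, ...) for the current accelerator,
    # so callers never reference torch.cuda directly.
    accelerator = torch.accelerator.current_accelerator()
    backend = torch.get_device_module(accelerator)
    return backend.memory_stats(index)
```

Doing this resolution once at startup (rather than per call) would keep the hot paths identical to today's CUDA-only code.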
