Currently, `device_list.py` resolves devices like the following:

```python
import torch

gpu_list = []
for i in range(torch.cuda.device_count()):
    device_name = torch.cuda.get_device_properties(i).name
    gpu_list.append(device_name)
print(gpu_list)
```
As far as I can see, the code doesn't rely on CUDA-specific features, so switching to the accelerator API would make other accelerators work automatically, assuming the right version of PyTorch is installed.
Currently, ROCm devices are also enumerated through the CUDA API, but Intel GPUs are not. For Intel GPUs it might(?) be necessary to use the XPU variants for things like memory stats, if I read the code correctly, though the API surface looks the same. Maybe the appropriate backend could be selected during initialization based on which GPU type is in use?
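A rough sketch of what this could look like, assuming the `torch.accelerator` API (available in recent PyTorch releases, roughly 2.6+) and assuming that each backend module (`torch.cuda`, `torch.xpu`, ...) exposes a matching `get_device_properties`; the `enumerate_devices` name is just for illustration:

```python
import torch

def enumerate_devices():
    """Return the names of all visible accelerator devices, backend-agnostically."""
    names = []
    # torch.accelerator only exists in recent PyTorch (assumption: ~2.6+),
    # so guard with hasattr to stay compatible with older versions.
    if hasattr(torch, "accelerator") and torch.accelerator.is_available():
        acc = torch.accelerator.current_accelerator()  # e.g. device("cuda") or device("xpu")
        # Pick the matching backend module (torch.cuda, torch.xpu, ...),
        # assuming they share the get_device_properties API.
        backend = getattr(torch, acc.type)
        for i in range(torch.accelerator.device_count()):
            names.append(backend.get_device_properties(i).name)
    return names

print(enumerate_devices())
```

On a CPU-only machine this prints an empty list; on CUDA, ROCm, or XPU builds it should list the device names without any backend-specific branching in the caller.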