This repository was archived by the owner on Jun 21, 2024. It is now read-only.

Inference on CPU or MPS (Arm-based Mac)? #3

@Pawandeep-prog

Description


Is there any workaround for running inference on CPU or on my Arm-based Mac M1? I am currently trying to run on a Mac M1 and getting the following error:

/Users/pawandeepsingh/Documents/Development/llm/PaLM/inference.py:50 in main
    model = torch.hub.load("conceptofmind/PaLM", args.model).to(device).to(dtype)

RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False.
If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') 
to map your storages to the CPU.

Thanks.
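
The error message itself points at the workaround: deserialize with `map_location` instead of the default CUDA mapping. Below is a minimal sketch of the two usual approaches, assuming PyTorch 1.12+ for MPS support; the checkpoint path `palm_checkpoint.pt` and the entrypoint name `palm_model_name` are hypothetical placeholders, and the monkey-patch assumes the hub entrypoint's internal `torch.load` call does not set `map_location` itself.

```python
import functools
import torch

# Pick the best available device: CUDA, then MPS (Apple Silicon), then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

# Workaround 1: when loading a checkpoint directly, map_location remaps
# storages that were saved on a CUDA device onto the target device during
# deserialization, avoiding the RuntimeError above.
# "palm_checkpoint.pt" is a hypothetical local path.
state_dict = torch.load("palm_checkpoint.pt", map_location=device)

# Workaround 2: the traceback shows the failing load happens inside
# torch.hub.load, so set a default map_location before calling it.
# "palm_model_name" is a hypothetical entrypoint; use whatever you pass
# as args.model in inference.py.
torch.load = functools.partial(torch.load, map_location=torch.device("cpu"))
model = torch.hub.load("conceptofmind/PaLM", "palm_model_name").to(device)
```

Overriding `torch.load` this way is blunt, but for inference it sidesteps the hard-coded CUDA mapping without touching the repository's hubconf; if the entrypoint ever passes its own `map_location`, the call-site keyword wins over the partial's default.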
