Seems that all models are loaded to the first GPU? #11

@DDDOH

Description

I use four A100s to run `python -m prover.launch --config=configs/RMaxTS.py --log_dir=logs/RMaxTS_results`, but I always get a CUDA out-of-memory error.

I added some print statements:

[screenshot: added print statements]

Then I got:

[screenshot: output showing all models placed on GPU 0]

All models try to load on the first GPU. Is this expected? Or maybe PyTorch behavior is different in newer versions (I am using 2.5.1+cu121, while requirements.txt pins 2.2.1).
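For what it's worth, a common pattern for this kind of launcher (assuming `prover.launch` spawns one worker process per model, which I have not verified in this repo) is to pin each worker to a single physical GPU by setting `CUDA_VISIBLE_DEVICES` in the worker before any CUDA initialization, so that `cuda:0` inside each worker maps to a different device. A minimal sketch, with a hypothetical `pin_worker_to_gpu` helper:

```python
import os

def pin_worker_to_gpu(worker_id: int, num_gpus: int) -> str:
    """Round-robin a worker onto one physical GPU.

    Hypothetical helper: each worker process would call this once,
    BEFORE importing torch / initializing CUDA, so that device
    "cuda:0" inside the worker refers to this physical GPU.
    """
    gpu = str(worker_id % num_gpus)
    os.environ["CUDA_VISIBLE_DEVICES"] = gpu
    return gpu

# Simulating the assignment for 8 workers across 4 A100s
# (in reality each call happens in a separate child process):
assignments = [pin_worker_to_gpu(i, 4) for i in range(8)]
print(assignments)  # ['0', '1', '2', '3', '0', '1', '2', '3']
```

If the launcher instead relies on `torch.cuda.set_device(rank)` or on each worker reading a rank from its config, a newer PyTorch version should not change that mapping by itself, so it may be worth checking whether the spawned workers actually receive distinct device indices.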
