Open
Description
```
$ export SF3D_USE_CPU=1
$ python3.10 main.py --novram --fast fp8_matrix_mult cublas_ops fp16_accumulation --disable-smart-memory
Total VRAM 4031 MB, total RAM 32036 MB
pytorch version: 2.7.0+cu126
Enabled fp16 accumulation.
Set vram state to: NO_VRAM
Disabling smart memory management
Device: cuda:0 Quadro M3000M : native
Checkpoint files will always be loaded safely.
Using pytorch attention
...
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 3.94 GiB of which 6.62 MiB is free
```
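The log shows that `SF3D_USE_CPU=1` and `--novram` did not prevent allocation on `cuda:0`, so the run still hit `torch.OutOfMemoryError`. Below is a minimal, torch-free sketch of the kind of device-fallback behavior the environment variable seems intended to trigger: try the GPU first and retry the same step on CPU when allocation fails. All names here (`run_with_fallback`, `demo_step`, the stand-in `OutOfMemoryError`) are hypothetical illustrations, not this project's actual API.

```python
class OutOfMemoryError(RuntimeError):
    """Stand-in for torch.OutOfMemoryError (hypothetical, for illustration)."""


def run_with_fallback(step, devices=("cuda:0", "cpu")):
    """Run `step` on each device in order, falling back when allocation fails."""
    last_err = None
    for dev in devices:
        try:
            return step(dev)
        except OutOfMemoryError as err:
            last_err = err  # remember the failure, try the next device
    raise last_err


def demo_step(device):
    """Simulated workload: the 4 GB GPU 'runs out of memory', CPU succeeds."""
    if device.startswith("cuda"):
        raise OutOfMemoryError("CUDA out of memory")
    return f"ran on {device}"


print(run_with_fallback(demo_step))  # prints "ran on cpu"
```

If the reporter's expectation was correct, a fallback of this shape would let the process complete on CPU instead of aborting when the 3.94 GiB card has only 6.62 MiB free.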