@@ -99,7 +100,9 @@ class InvokeAIAppConfig(BaseSettings):
         profile_prefix: An optional prefix for profile output files.
         profiles_dir: Path to profiles output directory.
         ram: Maximum memory amount used by memory model cache for rapid switching (GB).
+        vram: Amount of VRAM reserved for model storage (GB).
         convert_cache: Maximum size of on-disk converted models cache (GB).
+        lazy_offload: Keep models in VRAM until their space is needed.
         log_memory_usage: If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.
         device: Preferred execution device. `auto` will choose the device depending on the hardware platform and the installed torch capabilities.<br>Valid values: `auto`, `cpu`, `cuda:0`, `cuda:1`, `cuda:2`, `cuda:3`, `cuda:4`, `cuda:5`, `cuda:6`, `cuda:7`, `mps`
         devices: List of execution devices; will override default device selected.
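The attributes documented above are populated from the application's YAML config file. A hypothetical fragment exercising the new cache settings (the file layout and values here are assumptions for illustration, not part of this diff):

```yaml
# Hypothetical invokeai.yaml fragment; keys mirror the docstring attributes above.
ram: 8.0            # GB of system RAM used by the model cache
vram: 0.5           # GB of VRAM reserved for model storage
lazy_offload: true  # keep models in VRAM until their space is needed
```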
@@ -167,7 +170,9 @@ class InvokeAIAppConfig(BaseSettings):
 
     # CACHE
     ram: float = Field(default_factory=get_default_ram_cache_size, gt=0, description="Maximum memory amount used by memory model cache for rapid switching (GB).")
+    vram: float = Field(default=DEFAULT_VRAM_CACHE, ge=0, description="Amount of VRAM reserved for model storage (GB).")
+    lazy_offload: bool = Field(default=True, description="Keep models in VRAM until their space is needed.")
     log_memory_usage: bool = Field(default=False, description="If True, a memory snapshot will be captured before and after every model cache operation, and the result will be logged (at debug level). There is a time cost to capturing the memory snapshots, so it is recommended to only enable this feature if you are actively inspecting the model cache's behaviour.")
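Note the asymmetric constraints: `ram` uses `gt=0` (must be strictly positive) while `vram` uses `ge=0` (zero is valid and effectively disables the VRAM cache). A minimal sketch of that validation logic, using a plain dataclass instead of Pydantic; the default values, including the stand-in for `DEFAULT_VRAM_CACHE`, are illustrative assumptions:

```python
from dataclasses import dataclass

DEFAULT_VRAM_CACHE = 0.25  # GB; illustrative placeholder, not necessarily the project's default


@dataclass
class CacheSettings:
    """Simplified stand-in for the cache portion of InvokeAIAppConfig."""

    ram: float = 7.5                   # must satisfy gt=0 (strictly positive)
    vram: float = DEFAULT_VRAM_CACHE   # must satisfy ge=0 (zero disables VRAM caching)
    lazy_offload: bool = True          # keep models in VRAM until their space is needed
    log_memory_usage: bool = False

    def __post_init__(self) -> None:
        # Mirror Pydantic's Field(gt=0) / Field(ge=0) constraints.
        if not self.ram > 0:
            raise ValueError("ram must be > 0")
        if not self.vram >= 0:
            raise ValueError("vram must be >= 0")


# vram=0 is accepted (no VRAM reserved); ram=0 is rejected.
ok = CacheSettings(vram=0.0)
try:
    CacheSettings(ram=0.0)
    rejected = False
except ValueError:
    rejected = True
```

The `ge=0` bound on `vram` is what lets users on CPU-only or low-VRAM machines opt out of VRAM caching entirely without tripping validation.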