
Commit 1f0b9a2

fix : Missing LoRA adapter after API change (#1630)

1 parent: 8a12c9f

File tree

1 file changed: +6 −3 lines changed

llama_cpp/llama.py (6 additions, 3 deletions)

```diff
@@ -2083,11 +2083,14 @@ def pooling_type(self) -> str:

     def close(self) -> None:
         """Explicitly free the model from memory."""
-        self._stack.close()
+        if hasattr(self,'_stack'):
+            if self._stack is not None:
+                self._stack.close()

     def __del__(self) -> None:
-        if self._lora_adapter is not None:
-            llama_cpp.llama_lora_adapter_free(self._lora_adapter)
+        if hasattr(self,'_lora_adapter'):
+            if self._lora_adapter is not None:
+                llama_cpp.llama_lora_adapter_free(self._lora_adapter)
         self.close()

     @staticmethod
```
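The guards above follow a common pattern: if `__init__` raises before an attribute is assigned, Python may still invoke `__del__` on the partially constructed object, and unguarded attribute access then raises `AttributeError` during teardown. A minimal sketch (the `Resource` class and `_handle` attribute are hypothetical, not from the commit) of the same defensive cleanup:

```python
class Resource:
    def __init__(self, fail: bool = False):
        if fail:
            # Simulate a constructor failing before the attribute exists,
            # e.g. an upstream API change breaking adapter initialization.
            raise ValueError("init failed before _handle was set")
        self._handle = object()

    def close(self) -> None:
        # Guarded: safe even when __init__ never reached the assignment.
        if hasattr(self, "_handle"):
            if self._handle is not None:
                self._handle = None

    def __del__(self) -> None:
        # __del__ can run on a half-constructed object, so it must
        # tolerate missing attributes instead of assuming them.
        self.close()

# Normal construction and cleanup:
r = Resource()
r.close()

# Failed construction no longer triggers AttributeError in __del__:
try:
    Resource(fail=True)
except ValueError:
    pass
```

Without the `hasattr` checks, the `Resource(fail=True)` call would print an `AttributeError` traceback when the interpreter finalizes the half-built object, which is the symptom the commit fixes.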

0 commit comments