Commit 956e7bf

fix: Remove llama-kv-cache.*
The kv cache hierarchy was squashed so that now all of the llama-kv-cache-* implementations inherit directly from llama_memory_i and there is no intermediary llama_kv_cache base class.

ggml-org/llama.cpp#14006

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
1 parent 841c137 commit 956e7bf

File tree

2 files changed (+0, −2864 lines)


0 commit comments
