Commit 9323db9

fix: Remove llama-kv-cache.* and llava.*
The KV cache hierarchy was squashed so that all of the llama-kv-cache-* implementations now inherit directly from llama_memory_i; there is no longer an intermediary llama_kv_cache base class (ggml-org/llama.cpp#14006).

The llava.* tool files were migrated to mtmd.* files (ggml-org/llama.cpp#13460).

Branch: GraniteFour

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
1 parent: 841c137

File tree: 4 files changed (+0, −3504 lines)

