Releases: bartowski1182/llama.cpp
b2943
b2940 - Merge branch 'ggerganov:master' into master
b2937 - Add Smaug tokenizer support
b2936 - ggml: implement quantized KV cache for FA (#7372)