
Commit 131766d

readme : add warnings about breaking changes [no ci]

1 parent: 5d4c807

File tree

1 file changed: +3, -2 lines


README.md

Lines changed: 3 additions & 2 deletions
@@ -11,10 +11,11 @@
 Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others) in pure C/C++
 
 > [!IMPORTANT]
-[2024 Jun 12] Binaries have been renamed w/ a `llama-` prefix. `main` is now `llama-cli`, `server` is `llama-server`, etc (https://github.com/ggerganov/llama.cpp/pull/7809)
+[2024 Aug 31] Breaking changes to the C-style sampling API: https://github.com/ggerganov/llama.cpp/pull/8643
 
 ## Recent API changes
 
+- [2024 Aug 31] Refactored `llama_sample` and `llama_grammar` APIs: https://github.com/ggerganov/llama.cpp/pull/8643
 - [2024 Jun 26] The source code and CMake build scripts have been restructured https://github.com/ggerganov/llama.cpp/pull/8006
 - [2024 Apr 21] `llama_token_to_piece` can now optionally render special tokens https://github.com/ggerganov/llama.cpp/pull/6807
 - [2024 Apr 4] State and session file functions reorganized under `llama_state_*` https://github.com/ggerganov/llama.cpp/pull/6341
@@ -26,7 +27,7 @@ Inference of Meta's [LLaMA](https://arxiv.org/abs/2302.13971) model (and others)
 
 ## Hot topics
 
-- **`convert.py` has been deprecated and moved to `examples/convert_legacy_llama.py`, please use `convert_hf_to_gguf.py`** https://github.com/ggerganov/llama.cpp/pull/7430
+- `convert.py` has been deprecated and moved to `examples/convert_legacy_llama.py`, please use `convert_hf_to_gguf.py` https://github.com/ggerganov/llama.cpp/pull/7430
 - Initial Flash-Attention support: https://github.com/ggerganov/llama.cpp/pull/5021
 - BPE pre-tokenization support has been added: https://github.com/ggerganov/llama.cpp/pull/6920
 - MoE memory layout has been updated - reconvert models for `mmap` support and regenerate `imatrix` https://github.com/ggerganov/llama.cpp/pull/6387
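For context: the `[!IMPORTANT]` note that this commit replaces refers to the 2024 Jun 12 binary rename (https://github.com/ggerganov/llama.cpp/pull/7809), where each tool kept its flags and behavior but gained a `llama-` prefix (`main` is the one non-mechanical case, becoming `llama-cli`). A comment-only sketch of the mapping, with a hypothetical model path:

```shell
# Old invocation (pre-2024 Jun 12)            →  new binary name (PR #7809)
# ./main     -m models/7B/model.gguf -p "Hi"  →  ./llama-cli      -m models/7B/model.gguf -p "Hi"
# ./server   -m models/7B/model.gguf          →  ./llama-server   -m models/7B/model.gguf
# ./quantize in.gguf out.gguf Q4_K_M          →  ./llama-quantize in.gguf out.gguf Q4_K_M
```

Only the binary names changed; command-line flags were left untouched by that rename.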
