
Commit 94f10b9

readme : update hot tpoics
1 parent b3e9852 commit 94f10b9

File tree

1 file changed: +2 −14 lines


README.md

Lines changed: 2 additions & 14 deletions
```diff
@@ -11,21 +11,9 @@ Inference of [LLaMA](https://arxiv.org/abs/2302.13971) model in pure C/C++
 
 ### Hot topics
 
-- #### IMPORTANT: Tokenizer fixes and API change (developers and projects using `llama.cpp` built-in tokenization must read): https://github.com/ggerganov/llama.cpp/pull/2810
+- Local Falcon 180B inference on Mac Studio
 
-- GGUFv2 adds support for 64-bit sizes + backwards compatible: https://github.com/ggerganov/llama.cpp/pull/2821
-
-- Added support for Falcon models: https://github.com/ggerganov/llama.cpp/pull/2717
-
-- A new file format has been introduced: [GGUF](https://github.com/ggerganov/llama.cpp/pull/2398)
-
-  Last revision compatible with the old format: [dadbed9](https://github.com/ggerganov/llama.cpp/commit/dadbed99e65252d79f81101a392d0d6497b86caa)
-
-### Current `master` should be considered in Beta - expect some issues for a few days!
-
-### Be prepared to re-convert and / or re-quantize your GGUF models while this notice is up!
-
-### Issues with non-GGUF models will be considered with low priority!
+
+  https://github.com/ggerganov/llama.cpp/assets/1991296/98abd4e8-7077-464c-ae89-aebabca7757e
 
 ----
```

0 commit comments