Commit 70e98e8

docs: add new models and notes, order by ELO score
1 parent f0d52aa

1 file changed

README.md

Lines changed: 9 additions & 7 deletions
@@ -41,10 +41,12 @@ default, these will download the `_Q5_K_M.gguf` versions of the models. These
 models are quantized to 5 bits which provide a good balance between speed and
 accuracy.

-| Target | Model | Model Size | (V)RAM Required | [~Score](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | [~ELO](https://chat.lmsys.org/?leaderboard) |
-| --- | --- | --- | --- | --- | --- |
-| llama-2-13b | [`llama-2-13b-chat`](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF) | 13B | 11.73 GB | 53.26 | 1053 |
-| solar-10b | [`solar-10.7b-instruct-v1.0`](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF) | 10.7B | 10.10 GB | 74.2 | 1065 |
-| mistral-7b | [`mistral-7b-instruct-v0.2`](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF) | 7B | 7.63 GB | 65.71 | 1074 |
-| starling-7b | [`starling-lm-7b-beta`](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF) | 7B | 5.13 GB | 69.88 | 1118 |
-| command-r | [`c4ai-command-r-v01`](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF) | 35B | 25.00 GB | 68.54 | 1148 |
+| Target | Model | Model Size | (V)RAM Required | [~Score](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) | [~ELO](https://chat.lmsys.org/?leaderboard) | Notes |
+| --- | --- | --- | --- | --- | --- | --- |
+| llama-2-13b | [`llama-2-13b-chat`](https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF) | 13B | 11.73 GB | 55.69 | 1067 | The original open LLM |
+| solar-10b | [`solar-10.7b-instruct-v1.0`](https://huggingface.co/TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF) | 10.7B | 10.10 GB | 74.20 | 1066 | Scores highly on Huggingface |
+| phi-3-mini | [`phi-3-mini`](https://huggingface.co/bartowski/Phi-3-mini-4k-instruct-GGUF) | 3B | 3.13 GB | 69.91 | 1074 | The current best tiny model |
+| mistral-7b | [`mistral-7b-instruct-v0.2`](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF) | 7B | 7.63 GB | 65.71 | 1074 | The most popular 7B model |
+| starling-7b | [`starling-lm-7b-beta`](https://huggingface.co/bartowski/Starling-LM-7B-beta-GGUF) | 7B | 5.13 GB | 69.88 | 1118 | Scores highly on Huggingface |
+| command-r | [`c4ai-command-r-v01`](https://huggingface.co/bartowski/c4ai-command-r-v01-GGUF) | 35B | 25.00 GB | 68.54 | 1149 | The current best medium model |
+| llama-3-8b | [`meta-llama-3-8b-instruct`](https://huggingface.co/bartowski/Meta-Llama-3-8B-Instruct-GGUF) | 8B | 5.73 GB | 62.55 | 1154 | The current best small model |
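
The README context above notes that each target downloads the `_Q5_K_M.gguf` (5-bit quantized) build of its model. The project's own download mechanism is not shown in this diff; as an illustration only, one way to fetch such a file manually is with the `huggingface_hub` Python package. The repo id below comes from the mistral-7b row of the table; the exact filename is an assumption and should be checked against the repo's file listing.

```python
# Illustrative sketch only -- not the project's download mechanism.
# Assumes `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

# Repo id taken from the mistral-7b row above; the Q5_K_M filename is an
# assumption -- verify it against the "Files" tab of the repo.
path = hf_hub_download(
    repo_id="TheBloke/Mistral-7B-Instruct-v0.2-GGUF",
    filename="mistral-7b-instruct-v0.2.Q5_K_M.gguf",
)
print(f"Downloaded model to {path}")
```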
