README.md: 3 additions & 19 deletions
````diff
@@ -265,27 +265,11 @@ The [Hugging Face](https://huggingface.co) platform hosts a [number of LLMs](htt

 You can either manually download the GGUF file or directly use any `llama.cpp`-compatible models from Hugging Face by using this CLI argument: `-hf <user>/<model>[:quant]`

-llama.cpp also supports downloading and running models from [ModelScope](https://www.modelscope.cn/home), there are two ways to use models in ModelScope:
-
-1. Add an env variable: `LLAMACPP_USE_MODELSCOPE=True` to your command with the same arguments of Hugging Face(like `-hf <user>/<model>[:quant]`).
-2. Use modelscope arguments instead of the ones of Hugging Face: `-ms <user>/<model>[:quant] -msf xxx.gguf -mst xxx_token`
-
-```shell
-llama-cli -ms Qwen/QwQ-32B-GGUF
-```
-
-Pay attention to change the model repo to the **existing repo** of ModelScope. If you want to use a private repo, please make sure you have the rights of the repo and run with the `--ms_token` argument:
-
-```shell
-llama-cli -ms Qwen/QwQ-32B-GGUF --ms_token xxx
-```
-
-> You can change the endpoint of ModelScope by using `MODELSCOPE_DOMAIN=xxx`(like MODELSCOPE_DOMAIN=www.modelscope.ai).
+Alternatively, models can be fetched from [ModelScope](https://www.modelscope.cn) with the CLI argument `-ms <user>/<model>[:quant]`, for example, `llama-cli -ms Qwen/QwQ-32B-GGUF`. You may find models on ModelScope compatible with `llama.cpp` through:
+
+> You can change the download endpoint of ModelScope by using `MODELSCOPE_DOMAIN=xxx` (like `MODELSCOPE_DOMAIN=www.modelscope.ai`).

 After downloading a model, use the CLI tools to run it locally - see below.
````
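The `<user>/<model>[:quant]` argument format used by both `-hf` and `-ms` above can be illustrated with a small shell sketch. This is illustrative only: the variable names and the splitting logic here are assumptions for the example, not llama.cpp internals.

```shell
# Illustrative only: how a <user>/<model>[:quant] spec, as passed to
# -hf or -ms, decomposes into a repo name and an optional quant tag.
spec="Qwen/QwQ-32B-GGUF:Q4_K_M"
repo="${spec%%:*}"   # everything before the first ':' -> user/model
quant="${spec#*:}"   # everything after the first ':'  -> quant tag
if [ "$quant" = "$spec" ]; then
  quant=""           # no ':' present, so no quant was given
fi
echo "repo=$repo quant=${quant:-<default>}"
# prints: repo=Qwen/QwQ-32B-GGUF quant=Q4_K_M
```

When the `:quant` suffix is omitted, the tools fall back to a default quantization, so the empty `quant` case above is the common one.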