1 parent c39905e commit 4452308
docs/develop/rust/wasinn/llm_inference.md
@@ -65,7 +65,7 @@ The output WASM file is `target/wasm32-wasi/release/llama-chat.wasm`.
 We also need to get the model. Here we use the llama-2-13b model.
 
 ```bash
-curl -LO https://huggingface.co/wasmedge/llama2/blob/main/llama-2-13b-q5_k_m.gguf
+curl -LO https://huggingface.co/wasmedge/llama2/blob/main/llama-2-13b-chat-q5_k_m.gguf
 ```
 
 Next, use WasmEdge to load the llama-2-13b model and then ask the model questions.
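The "load the model and ask it questions" step that the diff's final line refers to is typically done with the WasmEdge CLI and its WASI-NN GGML plugin preload option. A minimal sketch of that invocation is below; the `default:GGML:AUTO:` preload string follows the WASI-NN plugin's `alias:backend:device:file` convention, and the exact flags and trailing model-alias argument may differ between WasmEdge versions, so treat this as illustrative rather than the committed doc's exact text.

```bash
# Assumes the downloaded GGUF file and llama-chat.wasm are in the
# current directory, and the WasmEdge WASI-NN GGML plugin is installed.
# --dir .:. maps the host working directory into the WASM sandbox;
# --nn-preload registers the model file under the alias "default".
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:llama-2-13b-chat-q5_k_m.gguf \
  llama-chat.wasm default
```

After it starts, the program reads a prompt from stdin and streams the model's reply back, which is the interactive chat loop the doc goes on to describe.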