
Commit fe13cd9

[Doc] update faq about w8a8 (#534)
update faq about w8a8

Signed-off-by: Mengqing Cao <cmq0113@163.com>
1 parent 415ed02

1 file changed: 12 additions & 0 deletions


docs/source/faqs.md

@@ -101,3 +101,15 @@ vllm-ascend is a plugin for vllm. Basically, the version of vllm-ascend is the s
### 10. Does vllm-ascend support the Prefill Disaggregation feature?

Currently, only 1P1D (one prefill instance, one decode instance) is supported by vllm. For vllm-ascend, support will be added by [this PR](https://github.com/vllm-project/vllm-ascend/pull/432). NPND (multiple prefill and decode instances) is not yet stable or fully supported in vllm; we will make it stable and supported in vllm-ascend in the future.

### 11. Does vllm-ascend support quantization methods?

Currently, no quantization method is natively supported in vllm-ascend. Quantization support is a work in progress; w8a8 will be supported first.

### 12. How to run the w8a8 DeepSeek model?

Currently, on v0.7.3, running w8a8 requires vllm + vllm-ascend + mindie-turbo. Once v0.8.X is released, only vllm + vllm-ascend will be needed. After installing these packages, you can follow the steps below to run the w8a8 DeepSeek model:

1. Quantize a bf16 DeepSeek model, e.g. [unsloth/DeepSeek-R1-BF16](https://modelscope.cn/models/unsloth/DeepSeek-R1-BF16), with msModelSlim to get a w8a8 DeepSeek model. Find more details in the [msModelSlim doc](https://gitee.com/ascend/msit/tree/master/msmodelslim/msmodelslim/pytorch/llm_ptq).
2. Copy the content of `quant_model_description_w8a8_dynamic.json` into the `quantization_config` field of the quantized model's `config.json`, as shown in the first sketch below.
3. Run inference with the quantized DeepSeek model, as shown in the second sketch below.
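
For step 2, here is a minimal sketch of the config merge, assuming the quantized model was written to `./DeepSeek-R1-w8a8` (a hypothetical path; use the actual output directory produced by msModelSlim):

```python
import json
from pathlib import Path

# Hypothetical output directory from the msModelSlim quantization in step 1.
model_dir = Path("./DeepSeek-R1-w8a8")

# Load the quantization description generated by msModelSlim.
with open(model_dir / "quant_model_description_w8a8_dynamic.json") as f:
    quant_description = json.load(f)

# Embed the description under `quantization_config` in the model config.
config_path = model_dir / "config.json"
with open(config_path) as f:
    config = json.load(f)

config["quantization_config"] = quant_description

# Write the updated config back in place.
with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```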
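
For step 3, a minimal offline-inference sketch using vllm's standard `LLM` API; the model path and `tensor_parallel_size` value are illustrative assumptions, not values from this FAQ:

```python
from vllm import LLM, SamplingParams

# Point vllm at the quantized model directory from step 2 (hypothetical path).
llm = LLM(
    model="./DeepSeek-R1-w8a8",
    tensor_parallel_size=8,  # assumed value; size this to your NPU setup
    trust_remote_code=True,
)

prompts = ["Hello, my name is"]
sampling_params = SamplingParams(temperature=0.8, max_tokens=64)

# Generate completions and print the text of each result.
for output in llm.generate(prompts, sampling_params):
    print(output.outputs[0].text)
```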
