Commit a8730e7

[Doc] update quantization docs with QwQ-32B-W8A8 example (#835)

1. Replace the deepseek-v2-lite model with the more practical QwQ-32B model.
2. Fix some incorrect commands.
3. Replace the modelslim version with a more formal tag.

Signed-off-by: 22dimensions <waitingwind@foxmail.com>

1 parent 7326644, commit a8730e7

File tree: 1 file changed, +16 -22 lines

docs/source/tutorials/multi_npu_quantization.md (16 additions, 22 deletions)
@@ -1,4 +1,4 @@
-# Multi-NPU (deepseek-v2-lite-w8a8)
+# Multi-NPU (QwQ 32B W8A8)
 
 ## Run docker container:
 :::{note}
@@ -31,60 +31,54 @@ docker run --rm \
 ## Install modelslim and convert model
 :::{note}
 You can choose to convert the model yourself or use the quantized model we uploaded,
-see https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V2-Lite-w8a8
+see https://www.modelscope.cn/models/vllm-ascend/QwQ-32B-W8A8
 :::
 
 ```bash
-git clone https://gitee.com/ascend/msit
+# (Optional)This tag is recommended and has been verified
+git clone https://gitee.com/ascend/msit -b modelslim-VLLM-8.1.RC1.b020
 
-# (Optional)This commit has been verified
-git checkout a396750f930e3bd2b8aa13730401dcbb4bc684ca
 cd msit/msmodelslim
 # Install by run this script
 bash install.sh
 pip install accelerate
 
-cd /msit/msmodelslim/example/DeepSeek
+cd example/Qwen
 # Original weight path, Replace with your local model path
-MODEL_PATH=/home/weight/DeepSeek-V2-Lite
+MODEL_PATH=/home/models/QwQ-32B
 # Path to save converted weight, Replace with your local path
-SAVE_PATH=/home/weight/DeepSeek-V2-Lite-w8a8
-mkdir -p $SAVE_PATH
+SAVE_PATH=/home/models/QwQ-32B-w8a8
+
 # In this conversion process, the npu device is not must, you can also set --device_type cpu to have a conversion
-python3 quant_deepseek.py --model_path $MODEL_PATH --save_directory $SAVE_PATH --device_type npu --act_method 2 --w_bit 8 --a_bit 8 --is_dynamic True
+python3 quant_qwen.py --model_path $MODEL_PATH --save_directory $SAVE_PATH --calib_file ../common/boolq.jsonl --w_bit 8 --a_bit 8 --device_type npu --anti_method m1 --trust_remote_code True
 ```
 
 ## Verify the quantized model
 The converted model files looks like:
 ```bash
 .
 |-- config.json
-|-- configuration_deepseek.py
-|-- fusion_result.json
+|-- configuration.json
 |-- generation_config.json
-|-- quant_model_description_w8a8_dynamic.json
-|-- quant_model_weight_w8a8_dynamic-00001-of-00004.safetensors
-|-- quant_model_weight_w8a8_dynamic-00002-of-00004.safetensors
-|-- quant_model_weight_w8a8_dynamic-00003-of-00004.safetensors
-|-- quant_model_weight_w8a8_dynamic-00004-of-00004.safetensors
-|-- quant_model_weight_w8a8_dynamic.safetensors.index.json
-|-- tokenization_deepseek_fast.py
+|-- quant_model_description.json
+|-- quant_model_weight_w8a8.safetensors
+|-- README.md
 |-- tokenizer.json
 `-- tokenizer_config.json
 ```
 
 Run the following script to start the vLLM server with quantize model:
 ```bash
-vllm serve /home/weight/DeepSeek-V2-Lite-w8a8 --tensor-parallel-size 4 --trust-remote-code --served-model-name "dpsk-w8a8" --max-model-len 4096
+vllm serve /home/models/QwQ-32B-w8a8 --tensor-parallel-size 4 --served-model-name "qwq-32b-w8a8" --max-model-len 4096 --quantization ascend
 ```
 
 Once your server is started, you can query the model with input prompts
 ```bash
 curl http://localhost:8000/v1/completions \
     -H "Content-Type: application/json" \
     -d '{
-        "model": "dpsk-w8a8",
-        "prompt": "what is deepseek?",
+        "model": "qwq-32b-w8a8",
+        "prompt": "what is large language model?",
         "max_tokens": "128",
         "top_p": "0.95",
         "top_k": "40",
