
Commit 45be1aa

Authored by wangxiyuan

[CI] Add codespell check for doc (vllm-project#1314)

Add a codespell check that runs for doc-only PRs.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>

1 parent 761bd3d · commit 45be1aa

File tree

- .github/doc_codespell.yaml
- docs/source/user_guide/quantization.md

2 files changed (+37, -4)

.github/doc_codespell.yaml

Lines changed: 33 additions & 0 deletions
@@ -0,0 +1,33 @@

```yaml
name: 'doc-codespell'

on:
  pull_request:
    branches:
      - 'main'
      - '*-dev'
    paths:
      - 'docs/**'

jobs:
  codespell:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10"]
    steps:
      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements-lint.txt
      - name: Run codespell check
        run: |
          CODESPELL_EXCLUDES=('--skip' 'tests/prompts/**,./benchmarks/sonnet.txt,*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**')
          CODESPELL_IGNORE_WORDS=('-L' 'CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue,CopyIn')

          codespell --toml pyproject.toml "${CODESPELL_EXCLUDES[@]}" "${CODESPELL_IGNORE_WORDS[@]}"
```
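
The two bash arrays in the last step just keep the skip patterns and ignore words readable before both are expanded into a single codespell invocation. For a quick local check before opening a doc PR, roughly the same run can be reproduced from the repository root (a minimal sketch, assuming `codespell` is pulled in by `requirements-lint.txt`, as the workflow's install step implies):

```bash
# Install the same lint dependencies the workflow installs
python -m pip install --upgrade pip
pip install -r requirements-lint.txt

# Mirror the CI invocation: identical skip patterns and ignore words
codespell --toml pyproject.toml \
    --skip 'tests/prompts/**,./benchmarks/sonnet.txt,*tests/lora/data/**,build/**,./vllm_ascend.egg-info/**' \
    -L 'CANN,cann,NNAL,nnal,ASCEND,ascend,EnQue,CopyIn'
```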

docs/source/user_guide/quantization.md

Lines changed: 4 additions & 4 deletions
````diff
@@ -8,7 +8,7 @@ Since 0.9.0rc2 version, quantization feature is experimentally supported in vLLM
 
 To quantize a model, users should install [ModelSlim](https://gitee.com/ascend/msit/blob/master/msmodelslim/README.md) which is the Ascend compression and acceleration tool. It is an affinity-based compression tool designed for acceleration, using compression as its core technology and built upon the Ascend platform.
 
-Currently, only the specific tag [modelslim-VLLM-8.1.RC1.b020_001](https://gitee.com/ascend/msit/blob/modelslim-VLLM-8.1.RC1.b020_001/msmodelslim/README.md) of modelslim works with vLLM Ascend. Please do not install other version until modelslim master version is avaliable for vLLM Ascend in the future.
+Currently, only the specific tag [modelslim-VLLM-8.1.RC1.b020_001](https://gitee.com/ascend/msit/blob/modelslim-VLLM-8.1.RC1.b020_001/msmodelslim/README.md) of modelslim works with vLLM Ascend. Please do not install other version until modelslim master version is available for vLLM Ascend in the future.
 
 Install modelslim:
 ```bash
@@ -34,7 +34,7 @@ You can also download the quantized model that we uploaded. Please note that the
 
 Once convert action is done, there are two important files generated.
 
-1. [confg.json](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V2-Lite-W8A8/file/view/master/config.json?status=1). Please make sure that there is no `quantization_config` field in it.
+1. [config.json](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V2-Lite-W8A8/file/view/master/config.json?status=1). Please make sure that there is no `quantization_config` field in it.
 
 2. [quant_model_description.json](https://www.modelscope.cn/models/vllm-ascend/DeepSeek-V2-Lite-W8A8/file/view/master/quant_model_description.json?status=1). All the converted weights info are recorded in this file.
 
@@ -77,7 +77,7 @@ sampling_params = SamplingParams(temperature=0.6, top_p=0.95, top_k=40)
 llm = LLM(model="{quantized_model_save_path}",
           max_model_len=2048,
           trust_remote_code=True,
-          # Enable quantization by specifing `quantization="ascend"`
+          # Enable quantization by specifying `quantization="ascend"`
           quantization="ascend")
 
 outputs = llm.generate(prompts, sampling_params)
@@ -90,7 +90,7 @@ for output in outputs:
 ### Online inference
 
 ```bash
-# Enable quantization by specifing `--quantization ascend`
+# Enable quantization by specifying `--quantization ascend`
 vllm serve {quantized_model_save_path} --served-model-name "deepseek-v2-lite-w8a8" --max-model-len 2048 --quantization ascend --trust-remote-code
 ```
 
````
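
One usage note on the last hunk: `vllm serve` exposes an OpenAI-compatible HTTP API (port 8000 by default, unless `--port` is set), so the quantized model can be smoke-tested with a plain HTTP request once the server is up. A minimal sketch; the prompt and token budget are illustrative, not from the docs:

```bash
# Smoke-test the server started by the `vllm serve` command above
# (assumes vLLM's default port 8000; adjust if --port was passed)
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "deepseek-v2-lite-w8a8",
        "prompt": "What is W8A8 quantization?",
        "max_tokens": 32
      }'
```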
