
Commit 4447e53

[Doc] Change not to no in faqs.md (#1357)
### What this PR does / why we need it?

Change not to no in faqs.md.

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Local Test

Signed-off-by: xleoken <xleoken@163.com>
1 parent a95afc0 commit 4447e53

File tree

1 file changed: +1 -1 lines changed


docs/source/faqs.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -86,7 +86,7 @@ Currently, w8a8 quantization is already supported by vllm-ascend originally on v
 
 Please following the [quantization inferencing tutorail](https://vllm-ascend.readthedocs.io/en/main/tutorials/multi_npu_quantization.html) and replace model to DeepSeek.
 
-### 12. There is not output in log when loading models using vllm-ascend, How to solve it?
+### 12. There is no output in log when loading models using vllm-ascend, How to solve it?
 
 If you're using vllm 0.7.3 version, this is a known progress bar display issue in VLLM, which has been resolved in [this PR](https://github.com/vllm-project/vllm/pull/12428), please cherry-pick it locally by yourself. Otherwise, please fill up an issue.
 
```
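The FAQ answer quoted in the diff tells users on vllm 0.7.3 to cherry-pick the upstream fix themselves. A minimal sketch of what that might look like in a local vllm checkout; `<fix-commit-sha>` is a placeholder, not a value taken from this page, so look up the actual merge commit in vllm PR #12428:

```bash
# Sketch only: pull the progress-bar fix from upstream vllm into a local checkout.
# <fix-commit-sha> is a placeholder; find the real commit in vllm PR #12428.
git remote add upstream https://github.com/vllm-project/vllm.git
git fetch upstream
git cherry-pick <fix-commit-sha>
```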

0 commit comments