Commit f6e34dd

Merge pull request #903 from yixing1992/main: Update README.md for Huawei Ascend NPU support modes

2 parents: 4cc6253 + e975062

1 file changed: +1 −1 lines


README.md (1 addition, 1 deletion)
@@ -235,7 +235,7 @@ DeepSeek-V3 can be deployed locally using the following hardware and open-source
 5. **vLLM**: Support DeepSeek-V3 model with FP8 and BF16 modes for tensor parallelism and pipeline parallelism.
 6. **LightLLM**: Supports efficient single-node or multi-node deployment for FP8 and BF16.
 7. **AMD GPU**: Enables running the DeepSeek-V3 model on AMD GPUs via SGLang in both BF16 and FP8 modes.
-8. **Huawei Ascend NPU**: Supports running DeepSeek-V3 on Huawei Ascend devices.
+8. **Huawei Ascend NPU**: Supports running DeepSeek-V3 on Huawei Ascend devices in both INT8 and BF16.
 
 Since FP8 training is natively adopted in our framework, we only provide FP8 weights. If you require BF16 weights for experimentation, you can use the provided conversion script to perform the transformation.
