Commit 76ddeff

[Doc] Remove duplicate docstring (#21012)
Signed-off-by: yewentao256 <zhyanwentao@126.com>
1 parent f460983 commit 76ddeff

File tree: 1 file changed (+0, -2)


vllm/model_executor/layers/quantization/utils/fp8_utils.py

Lines changed: 0 additions & 2 deletions
@@ -378,8 +378,6 @@ def per_token_group_quant_fp8(
             is supported for now.
         column_major_scales: Outputs scales in column major.
         out_q: Optional output tensor. If not provided, function will create.
-        tuple[torch.Tensor, torch.Tensor]: The quantized tensor and the
-            scaling factor for quantization.
     Returns:
         tuple[torch.Tensor, torch.Tensor]: The quantized tensor and the
             scaling factor.
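
The commit drops a return description that had been duplicated inside the Args section, leaving a single Returns block. A minimal sketch of how the cleaned-up docstring reads after the change (the function signature and the elided argument lines are assumptions; only the docstring lines visible in the diff context are taken from the commit):

```python
def per_token_group_quant_fp8(x, group_size, column_major_scales=False, out_q=None):
    """Quantize a tensor to FP8 with per-token-group scaling factors.

    Sketch only: signature and summary line are hypothetical; the Args and
    Returns lines below mirror the diff context of commit 76ddeff.

    Args:
        column_major_scales: Outputs scales in column major.
        out_q: Optional output tensor. If not provided, function will create.

    Returns:
        tuple[torch.Tensor, torch.Tensor]: The quantized tensor and the
            scaling factor.
    """
```

After the fix, the return-type description appears exactly once, under Returns, rather than also floating inside the argument list.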
