Commit 2fc95a6

fix codecheck

Signed-off-by: David9857 <985700846@qq.com>

1 parent: d7f8be5

File tree

1 file changed: +9 −11 lines

vllm_ascend/ops/fused_moe.py

Lines changed: 9 additions & 11 deletions
@@ -39,17 +39,15 @@
 USING_LCCL_COM: bool = envs_ascend.USING_LCCL_COM


-def fused_experts_with_mc2(
-    hidden_states: torch.Tensor,
-    w1: torch.Tensor,
-    w2: torch.Tensor,
-    topk_weights: torch.Tensor,
-    topk_ids: torch.Tensor,
-    top_k: int,
-    expert_map: torch.Tensor = None,
-    moe_all_to_all_group_name: Optional[str] = None,
-    **kwargs
-) -> torch.Tensor:
+def fused_experts_with_mc2(hidden_states: torch.Tensor,
+                           w1: torch.Tensor,
+                           w2: torch.Tensor,
+                           topk_weights: torch.Tensor,
+                           topk_ids: torch.Tensor,
+                           top_k: int,
+                           expert_map: torch.Tensor = None,
+                           moe_all_to_all_group_name: Optional[str] = None,
+                           **kwargs) -> torch.Tensor:
     global_bs = 0
     moe_expert_num = len(expert_map)
     kwargs_mc2 = {
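For context, the reformatted signature can be reproduced as a standalone stub. Note a subtlety visible in the diff: `expert_map` defaults to `None`, yet the body immediately calls `len(expert_map)`, so the argument is effectively required. The body below is purely illustrative (the real implementation dispatches to the MC2 all-to-all expert kernels and is elided); `from __future__ import annotations` keeps the `torch.Tensor` annotations unevaluated so the sketch runs without torch installed.

```python
from __future__ import annotations

from typing import Optional


def fused_experts_with_mc2(hidden_states: torch.Tensor,
                           w1: torch.Tensor,
                           w2: torch.Tensor,
                           topk_weights: torch.Tensor,
                           topk_ids: torch.Tensor,
                           top_k: int,
                           expert_map: torch.Tensor = None,
                           moe_all_to_all_group_name: Optional[str] = None,
                           **kwargs) -> torch.Tensor:
    # As in the diff: the expert count comes from the expert map, so
    # passing expert_map=None would raise despite the None default.
    moe_expert_num = len(expert_map)
    assert moe_expert_num > 0
    # Illustrative stand-in for the real MC2 dispatch/compute/combine path.
    return hidden_states
```

Collapsing the vertically-listed parameters onto continuation lines aligned under the opening parenthesis is the layout that formatters such as yapf (used by vLLM's code checks) emit, which is consistent with the commit message "fix codecheck".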
