Commit 13c9969 (parent: eba18b2)

fix format

Signed-off-by: Bill Nell <bnell@redhat.com>

File tree

  • vllm/model_executor/layers/fused_moe/layer.py

1 file changed: +3 additions, −3 deletions

vllm/model_executor/layers/fused_moe/layer.py (3 additions, 3 deletions)

@@ -23,9 +23,9 @@
 from vllm.forward_context import ForwardContext, get_forward_context
 from vllm.logger import init_logger
 from vllm.model_executor.custom_op import CustomOp
-from .modular_kernel import (FusedMoEModularKernel,
-                             FusedMoEPermuteExpertsUnpermute,
-                             FusedMoEPrepareAndFinalize)
+from .modular_kernel import (FusedMoEModularKernel,
+                             FusedMoEPermuteExpertsUnpermute,
+                             FusedMoEPrepareAndFinalize)
 from vllm.model_executor.layers.fused_moe.rocm_aiter_fused_moe import (
     is_rocm_aiter_moe_enabled)
 from vllm.model_executor.layers.quantization.base_config import (
