
Commit bbf0ce6

[Bugfix] Round graph batch size up to tp size when expert parallel is enabled
Signed-off-by: liziyu <liziyu16@huawei.com>
1 parent 3ea2410 commit bbf0ce6

File tree

1 file changed: +4 -2 lines changed

vllm_ascend/worker/model_runner_v1.py

Lines changed: 4 additions & 2 deletions
@@ -2101,7 +2101,9 @@ def check_torchair_graph_batch_sizes(self):
         if self.parallel_config.enable_expert_parallel:
             new_graph_batch_sizes = []
             for graph_batch_size in self.torchair_graph_batch_sizes:
-                cur_graph_batch_size = graph_batch_size + tp_size - graph_batch_size % tp_size
-                if cur_graph_batch_size not in new_graph_batch_sizes:
+                cur_graph_batch_size = (graph_batch_size + tp_size -
+                                        1) // tp_size * tp_size
+                if cur_graph_batch_size not in new_graph_batch_sizes and \
+                        cur_graph_batch_size <= self.scheduler_config.max_num_batched_tokens:
                     new_graph_batch_sizes.append(cur_graph_batch_size)
             self.torchair_graph_batch_sizes = new_graph_batch_sizes
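
The old expression always added tp_size before subtracting the remainder, so a batch size that was already a multiple of tp_size (for example 8 with tp_size=4) was bumped to the next multiple (12) instead of being kept. The new expression is the usual ceiling-to-multiple formula, and the extra condition drops any rounded size that exceeds the scheduler's max_num_batched_tokens. Below is a minimal standalone sketch of that behaviour; the values of tp_size, max_num_batched_tokens, and the sample batch sizes are illustrative assumptions, not taken from any real config.

def round_up_old(batch_size: int, tp_size: int) -> int:
    # Pre-fix formula: always adds tp_size, so a size already divisible
    # by tp_size (e.g. 8 with tp_size=4) is over-rounded to 12.
    return batch_size + tp_size - batch_size % tp_size

def round_up_new(batch_size: int, tp_size: int) -> int:
    # Post-fix formula: ceiling division, multiples of tp_size stay unchanged.
    return (batch_size + tp_size - 1) // tp_size * tp_size

tp_size = 4                    # assumed tensor-parallel size
max_num_batched_tokens = 256   # assumed scheduler limit
graph_batch_sizes = [1, 8, 48, 300]

new_graph_batch_sizes = []
for graph_batch_size in graph_batch_sizes:
    cur = round_up_new(graph_batch_size, tp_size)
    # Deduplicate and drop sizes the scheduler could never batch.
    if cur not in new_graph_batch_sizes and cur <= max_num_batched_tokens:
        new_graph_batch_sizes.append(cur)

print(new_graph_batch_sizes)  # [4, 8, 48]  (300 rounds to 300 and is filtered out)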
