
Commit 7e9f7d1

Byte-Lab authored and Ingo Molnar committed
sched/fair: Simplify the update_sd_pick_busiest() logic
When comparing the current struct sched_group with the yet-busiest domain in update_sd_pick_busiest(), if the two groups have the same group type, we're currently doing a bit of unnecessary work for any group >= group_misfit_task. We're comparing the two groups, and then returning only if false (the group in question is not the busiest). Otherwise, we break out, do an extra unnecessary conditional check that's vacuously false for any group type > group_fully_busy, and then always return true.

Let's just return directly in the switch statement instead. This doesn't change the size of vmlinux with llvm 17 (not surprising given that all of this is inlined in load_balance()), but it does shrink load_balance() by 88 bytes on x86. Given that it also improves readability, this seems worth doing.

Signed-off-by: David Vernet <void@manifault.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <vschneid@redhat.com>
Link: https://lore.kernel.org/r/20240206043921.850302-4-void@manifault.com
1 parent 7f1a722 commit 7e9f7d1
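To make the pattern concrete for readers outside the kernel tree, here is a minimal standalone C sketch of the before/after control flow the commit message describes. The enum, struct, and asym_cap flag below are simplified stand-ins chosen for illustration, not the actual fair.c types or logic:

    /*
     * Minimal standalone sketch of the control-flow pattern the patch removes.
     * The enum, struct, and asym_cap flag are simplified stand-ins for the
     * kernel's sched_group statistics, not the real fair.c code.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum group_type { group_fully_busy, group_misfit_task, group_overloaded };

    struct stats {
        enum group_type group_type;
        unsigned long avg_load;
    };

    /*
     * Before the patch: the overloaded case returns false on failure, breaks
     * on success, and then falls through to a trailing check that is
     * vacuously false for any group type > group_fully_busy.
     */
    static bool pick_busiest_before(const struct stats *sgs,
                                    const struct stats *busiest, bool asym_cap)
    {
        switch (sgs->group_type) {
        case group_overloaded:
            if (sgs->avg_load <= busiest->avg_load)
                return false;
            break;
        default:
            break;
        }

        /* Never true for group_overloaded: it is > group_fully_busy. */
        if (asym_cap && sgs->group_type <= group_fully_busy)
            return false;

        return true;
    }

    /* After the patch: return the comparison directly from the switch arm. */
    static bool pick_busiest_after(const struct stats *sgs,
                                   const struct stats *busiest)
    {
        switch (sgs->group_type) {
        case group_overloaded:
            return sgs->avg_load > busiest->avg_load;
        default:
            return true;
        }
    }

    int main(void)
    {
        struct stats sgs = { group_overloaded, 100 };
        struct stats busiest = { group_overloaded, 50 };

        /* Both variants agree; the second just skips the dead check. */
        printf("before: %d, after: %d\n",
               pick_busiest_before(&sgs, &busiest, true),
               pick_busiest_after(&sgs, &busiest));
        return 0;
    }

Both variants return the same result for every input; the rewrite only removes the dead trailing check and the break, which is why the patch can shrink load_balance() without changing behavior.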

File tree

1 file changed: +3 −9 lines


kernel/sched/fair.c

Lines changed: 3 additions & 9 deletions
@@ -10010,9 +10010,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	switch (sgs->group_type) {
 	case group_overloaded:
 		/* Select the overloaded group with highest avg_load. */
-		if (sgs->avg_load <= busiest->avg_load)
-			return false;
-		break;
+		return sgs->avg_load > busiest->avg_load;

 	case group_imbalanced:
 		/*
@@ -10023,18 +10021,14 @@ static bool update_sd_pick_busiest(struct lb_env *env,

 	case group_asym_packing:
 		/* Prefer to move from lowest priority CPU's work */
-		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
-			return false;
-		break;
+		return sched_asym_prefer(sds->busiest->asym_prefer_cpu, sg->asym_prefer_cpu);

 	case group_misfit_task:
 		/*
 		 * If we have more than one misfit sg go with the biggest
 		 * misfit.
 		 */
-		if (sgs->group_misfit_task_load <= busiest->group_misfit_task_load)
-			return false;
-		break;
+		return sgs->group_misfit_task_load > busiest->group_misfit_task_load;

 	case group_smt_balance:
 		/*
