
Commit 4fca50d

jankara authored and tytso committed
ext4: make mballoc try target group first even with mb_optimize_scan
One of the side effects of mb_optimize_scan was that the optimized functions
for selecting the next group to try were called even before we tried the goal
group. As a result we no longer allocate files close to their corresponding
inodes, nor do we try to expand the currently allocated extent in the same
group. This results in a reaim regression of up to 8% with the workfile.disk
workload and many clients on my test machine:

                     baseline               mb_optimize_scan
Hmean     disk-1       2114.16 (   0.00%)     2099.37 (  -0.70%)
Hmean     disk-41     87794.43 (   0.00%)    83787.47 *  -4.56%*
Hmean     disk-81    148170.73 (   0.00%)   135527.05 *  -8.53%*
Hmean     disk-121   177506.11 (   0.00%)   166284.93 *  -6.32%*
Hmean     disk-161   220951.51 (   0.00%)   207563.39 *  -6.06%*
Hmean     disk-201   208722.74 (   0.00%)   203235.59 (  -2.63%)
Hmean     disk-241   222051.60 (   0.00%)   217705.51 (  -1.96%)
Hmean     disk-281   252244.17 (   0.00%)   241132.72 *  -4.41%*
Hmean     disk-321   255844.84 (   0.00%)   245412.84 *  -4.08%*

It also causes a huge regression (time increased by a factor of 5 or so) when
untarring an archive with lots of small files on some eMMC storage cards.

Fix the problem by making sure we try the goal group first.

Fixes: 196e402 ("ext4: improve cr 0 / cr 1 group scanning")
CC: stable@kernel.org
Reported-and-tested-by: Stefan Wahren <stefan.wahren@i2se.com>
Tested-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Reviewed-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Link: https://lore.kernel.org/all/20220727105123.ckwrhbilzrxqpt24@quack3/
Link: https://lore.kernel.org/all/0d81a7c2-46b7-6010-62a4-3e6cfc1628d6@i2se.com/
Signed-off-by: Jan Kara <jack@suse.cz>
Link: https://lore.kernel.org/r/20220908092136.11770-1-jack@suse.cz
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
1 parent 7e18e42 commit 4fca50d

File tree

1 file changed: +7 −7 lines changed


fs/ext4/mballoc.c

Lines changed: 7 additions & 7 deletions
@@ -1049,8 +1049,10 @@ static void ext4_mb_choose_next_group(struct ext4_allocation_context *ac,
 {
         *new_cr = ac->ac_criteria;
 
-        if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining)
+        if (!should_optimize_scan(ac) || ac->ac_groups_linear_remaining) {
+                *group = next_linear_group(ac, *group, ngroups);
                 return;
+        }
 
         if (*new_cr == 0) {
                 ext4_mb_choose_next_group_cr0(ac, new_cr, group, ngroups);
@@ -2636,7 +2638,7 @@ static noinline_for_stack int
 ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
 {
         ext4_group_t prefetch_grp = 0, ngroups, group, i;
-        int cr = -1;
+        int cr = -1, new_cr;
         int err = 0, first_err = 0;
         unsigned int nr = 0, prefetch_ios = 0;
         struct ext4_sb_info *sbi;
@@ -2711,13 +2713,11 @@ ext4_mb_regular_allocator(struct ext4_allocation_context *ac)
         ac->ac_groups_linear_remaining = sbi->s_mb_max_linear_groups;
         prefetch_grp = group;
 
-        for (i = 0; i < ngroups; group = next_linear_group(ac, group, ngroups),
-                     i++) {
-                int ret = 0, new_cr;
+        for (i = 0, new_cr = cr; i < ngroups; i++,
+             ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups)) {
+                int ret = 0;
 
                 cond_resched();
-
-                ext4_mb_choose_next_group(ac, &new_cr, &group, ngroups);
                 if (new_cr != cr) {
                         cr = new_cr;
                         goto repeat;
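
For readers less familiar with mballoc internals, the following stand-alone C sketch models the old versus new loop shape described in the commit message. The group count, the goal group, and the choose_next_group()/next_linear_group() helpers here are simplified, hypothetical stand-ins for ext4_mb_choose_next_group() and its surroundings, not the actual kernel implementation.

#include <stdio.h>

#define NGROUPS  8   /* hypothetical number of block groups */
#define GOAL     3   /* hypothetical goal group near the inode */

/* Stand-in for next_linear_group(): simple wrap-around advance. */
static unsigned next_linear_group(unsigned group)
{
        return (group + 1) % NGROUPS;
}

/* Stand-in for ext4_mb_choose_next_group(): with the optimized scan
 * enabled, pretend it always suggests group 0 as the best candidate.
 * In the non-optimized case the patched helper now does the linear
 * advance itself. */
static void choose_next_group(unsigned *group, int optimize)
{
        if (!optimize) {
                *group = next_linear_group(*group);
                return;
        }
        *group = 0;
}

int main(void)
{
        unsigned old_first = GOAL, new_first = GOAL;

        /* Old loop shape: the chooser ran before the first scan, so the
         * goal group was never tried first when mb_optimize_scan was on. */
        choose_next_group(&old_first, 1);
        printf("old loop scans group %u first (goal was %u)\n", old_first, GOAL);

        /* New loop shape: the current (goal) group is scanned first and
         * the chooser only runs at the end of each iteration. */
        printf("new loop scans group %u first (goal was %u)\n", new_first, GOAL);
        return 0;
}

Run as an ordinary user-space program, the first line printed shows the old loop jumping to the chooser's suggestion before the goal group is ever scanned, while the second line shows the patched loop scanning the goal group first, with group selection deferred to the end of each iteration.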
