
Commit 256aab4

bvanassche authored and axboe committed
Revert "block/mq-deadline: use correct way to throttling write requests"
The code "max(1U, 3 * (1U << shift) / 4)" comes from the Kyber I/O scheduler. The Kyber I/O scheduler maintains one internal queue per hwq and hence derives its async_depth from the number of hwq tags. Using this approach for the mq-deadline scheduler is wrong since the mq-deadline scheduler maintains one internal queue for all hwqs combined. Hence this revert. Cc: stable@vger.kernel.org Cc: Damien Le Moal <dlemoal@kernel.org> Cc: Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com> Cc: Zhiguo Niu <Zhiguo.Niu@unisoc.com> Fixes: d47f971 ("block/mq-deadline: use correct way to throttling write requests") Signed-off-by: Bart Van Assche <bvanassche@acm.org> Link: https://lore.kernel.org/r/20240313214218.1736147-1-bvanassche@acm.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent b874d4a commit 256aab4

1 file changed: +1 -2 lines changed

block/mq-deadline.c

Lines changed: 1 addition & 2 deletions
@@ -646,9 +646,8 @@ static void dd_depth_updated(struct blk_mq_hw_ctx *hctx)
 	struct request_queue *q = hctx->queue;
 	struct deadline_data *dd = q->elevator->elevator_data;
 	struct blk_mq_tags *tags = hctx->sched_tags;
-	unsigned int shift = tags->bitmap_tags.sb.shift;
 
-	dd->async_depth = max(1U, 3 * (1U << shift) / 4);
+	dd->async_depth = max(1UL, 3 * q->nr_requests / 4);
 
 	sbitmap_queue_min_shallow_depth(&tags->bitmap_tags, dd->async_depth);
 }
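
For illustration only (not part of the commit), the standalone C sketch below mirrors the restored queue-wide calculation, dd->async_depth = max(1UL, 3 * q->nr_requests / 4). The nr_requests values it uses are assumptions; in the kernel the value comes from q->nr_requests rather than from any single hwq's tag bitmap.

/*
 * Userspace sketch of the restored queue-wide async_depth calculation.
 * The nr_requests values below are assumptions for illustration only;
 * in the kernel the value comes from q->nr_requests.
 */
#include <stdio.h>

static unsigned long dd_async_depth(unsigned long nr_requests)
{
	unsigned long depth = 3 * nr_requests / 4;

	/* Mirror max(1UL, ...): never let the async depth drop to zero. */
	return depth > 1 ? depth : 1;
}

int main(void)
{
	const unsigned long nr_requests[] = { 2, 64, 256 };	/* assumed depths */

	for (unsigned long i = 0; i < sizeof(nr_requests) / sizeof(nr_requests[0]); i++)
		printf("nr_requests=%3lu -> async_depth=%3lu\n",
		       nr_requests[i], dd_async_depth(nr_requests[i]));
	return 0;
}

With nr_requests = 64, for example, the computed async depth is 48, which caps asynchronous (write) requests at roughly three quarters of the scheduler's total request budget, independent of how many hardware queues back the device.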
