
Commit d13ddd9

io_uring/sqpoll: ensure that normal task_work is also run timely
With the move to private task_work, SQPOLL neglected to also run the normal task_work, if any is pending. This will eventually get run, but we should run it with the private task_work to ensure that things like a final fput() are processed in a timely fashion.

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/313824bc-799d-414f-96b7-e6de57c7e21d@gmail.com/
Reported-by: Andrew Udvare <audvare@gmail.com>
Fixes: af5d68f ("io_uring/sqpoll: manage task_work privately")
Tested-by: Christian Heusel <christian@heusel.eu>
Tested-by: Andrew Udvare <audvare@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent b9dd56e · commit d13ddd9

File tree: 1 file changed (+4, -2 lines)


io_uring/sqpoll.c

Lines changed: 4 additions & 2 deletions
@@ -238,11 +238,13 @@ static unsigned int io_sq_tw(struct llist_node **retry_list, int max_entries)
 	if (*retry_list) {
 		*retry_list = io_handle_tw_list(*retry_list, &count, max_entries);
 		if (count >= max_entries)
-			return count;
+			goto out;
 		max_entries -= count;
 	}
-
 	*retry_list = tctx_task_work_run(tctx, max_entries, &count);
+out:
+	if (task_work_pending(current))
+		task_work_run();
 	return count;
 }
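
For context, below is a sketch of how io_sq_tw() reads in io_uring/sqpoll.c once this patch is applied. Only the lines shown in the hunk above are confirmed by this commit; the declarations of tctx and count at the top of the function are assumptions taken from the surrounding mainline code and are marked as such.

/*
 * Sketch of io_sq_tw() after this commit. The tctx/count declarations are
 * assumed from the surrounding io_uring/sqpoll.c code, not part of the
 * hunk above.
 */
static unsigned int io_sq_tw(struct llist_node **retry_list, int max_entries)
{
	struct io_uring_task *tctx = current->io_uring;	/* assumed */
	unsigned int count = 0;				/* assumed */

	if (*retry_list) {
		/* Drain previously deferred (retry) work first, capped at max_entries. */
		*retry_list = io_handle_tw_list(*retry_list, &count, max_entries);
		if (count >= max_entries)
			goto out;
		max_entries -= count;
	}
	/* Run the io_uring-private task_work for this task context. */
	*retry_list = tctx_task_work_run(tctx, max_entries, &count);
out:
	/*
	 * The fix: also flush normal task_work, so pending items such as a
	 * final fput() are not left waiting behind the private list.
	 */
	if (task_work_pending(current))
		task_work_run();
	return count;
}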
