Commit 76b367a

io_uring/net: limit inline multishot retries
If we have multiple clients and some/all are flooding receives to such an extent that we can retry a LOT while handling multishot receives, then we can starve some clients and hence serve traffic in an imbalanced fashion. Limit multishot retry attempts to an arbitrary value whose only purpose is to ensure that we don't keep serving a single connection for too long. We default to 32 retries, which should be more than enough to provide fairness, yet not so small that we spend too much time requeuing rather than handling traffic.

Cc: stable@vger.kernel.org
Depends-on: 704ea88 ("io_uring/poll: add requeue return code from poll multishot handling")
Depends-on: 1e5d765a82f ("io_uring/net: un-indent mshot retry path in io_recv_finish()")
Depends-on: e84b01a ("io_uring/poll: move poll execution helpers higher up")
Fixes: b3fdea6 ("io_uring: multishot recv")
Fixes: 9bb6690 ("io_uring: support multishot in recvmsg")
Link: axboe/liburing#1043
Signed-off-by: Jens Axboe <axboe@kernel.dk>
1 parent: 704ea88

File tree

1 file changed: +20 -3 lines changed


io_uring/net.c

Lines changed: 20 additions & 3 deletions
@@ -60,6 +60,7 @@ struct io_sr_msg {
 	unsigned			len;
 	unsigned			done_io;
 	unsigned			msg_flags;
+	unsigned			nr_multishot_loops;
 	u16				flags;
 	/* initialised and used only by !msg send variants */
 	u16				addr_len;
@@ -70,6 +71,13 @@ struct io_sr_msg {
 	struct io_kiocb			*notif;
 };
 
+/*
+ * Number of times we'll try and do receives if there's more data. If we
+ * exceed this limit, then add us to the back of the queue and retry from
+ * there. This helps fairness between flooding clients.
+ */
+#define MULTISHOT_MAX_RETRY	32
+
 static inline bool io_check_multishot(struct io_kiocb *req,
 				      unsigned int issue_flags)
 {
@@ -611,6 +619,7 @@ int io_recvmsg_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
 	sr->msg_flags |= MSG_CMSG_COMPAT;
 #endif
 	sr->done_io = 0;
+	sr->nr_multishot_loops = 0;
 	return 0;
 }
 
@@ -654,12 +663,20 @@ static inline bool io_recv_finish(struct io_kiocb *req, int *ret,
 	 */
 	if (io_fill_cqe_req_aux(req, issue_flags & IO_URING_F_COMPLETE_DEFER,
 				*ret, cflags | IORING_CQE_F_MORE)) {
+		struct io_sr_msg *sr = io_kiocb_to_cmd(req, struct io_sr_msg);
+		int mshot_retry_ret = IOU_ISSUE_SKIP_COMPLETE;
+
 		io_recv_prep_retry(req);
 		/* Known not-empty or unknown state, retry */
-		if (cflags & IORING_CQE_F_SOCK_NONEMPTY || msg->msg_inq == -1)
-			return false;
+		if (cflags & IORING_CQE_F_SOCK_NONEMPTY || msg->msg_inq == -1) {
+			if (sr->nr_multishot_loops++ < MULTISHOT_MAX_RETRY)
+				return false;
+			/* mshot retries exceeded, force a requeue */
+			sr->nr_multishot_loops = 0;
+			mshot_retry_ret = IOU_REQUEUE;
+		}
 		if (issue_flags & IO_URING_F_MULTISHOT)
-			*ret = IOU_ISSUE_SKIP_COMPLETE;
+			*ret = mshot_retry_ret;
 		else
 			*ret = -EAGAIN;
 		return true;
