
Commit 5de0219

dhowells authored and brauner committed
netfs: Fix setting NETFS_RREQ_ALL_QUEUED to be after all subreqs queued
Due to the code that queues a subreq on the active subrequest list getting moved to netfs_issue_read(), the NETFS_RREQ_ALL_QUEUED flag may now get set before the list-add actually happens. This is not a problem if the collection worker happens after the list-add, but it's a race - and, for 9P, where the read from the server is synchronous and done in the submitting thread, this is a lot more likely.

The result is that, if the timing is wrong, a ref gets leaked because the collector thinks that all the subreqs have completed (because it can't see the last one yet) and clears NETFS_RREQ_IN_PROGRESS - at which point, the collection worker no longer goes into the collector.

This can be provoked with AFS by injecting an msleep() right before the final subreq is queued.

Fix this by splitting the queuing part out of netfs_issue_read() into a new function, netfs_queue_read(), and calling it separately. The setting of NETFS_RREQ_ALL_QUEUED is then done by netfs_queue_read() whilst it is holding the spinlock (that's probably unnecessary, but shouldn't hurt).

It might be better to set a flag on the final subreq, but this could be a problem if an error occurs and we can't queue it.
Fixes: e2d46f2 ("netfs: Change the read result collector to only use one work item")
Reported-by: Ihor Solodrai <ihor.solodrai@pm.me>
Closes: https://lore.kernel.org/r/a7x33d4dnMdGTtRivptq6S1i8btK70SNBP2XyX_xwDAhLvgQoPox6FVBOkifq4eBinfFfbZlIkMZBe3QarlWTxoEtHZwJCZbNKtaqrR7PvI=@pm.me/
Signed-off-by: David Howells <dhowells@redhat.com>
Link: https://lore.kernel.org/r/20250212222402.3618494-4-dhowells@redhat.com
Tested-by: Ihor Solodrai <ihor.solodrai@linux.dev>
cc: Eric Van Hensbergen <ericvh@kernel.org>
cc: Latchesar Ionkov <lucho@ionkov.net>
cc: Dominique Martinet <asmadeus@codewreck.org>
cc: Christian Schoenebeck <linux_oss@crudebyte.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Steve French <stfrench@microsoft.com>
cc: Paulo Alcantara <pc@manguebit.com>
cc: Jeff Layton <jlayton@kernel.org>
cc: v9fs@lists.linux.dev
cc: linux-cifs@vger.kernel.org
cc: netfs@lists.linux.dev
cc: linux-fsdevel@vger.kernel.org
Signed-off-by: Christian Brauner <brauner@kernel.org>
1 parent d01c495 commit 5de0219

File tree

1 file changed (+13, -6 lines)


fs/netfs/buffered_read.c

Lines changed: 13 additions & 6 deletions
@@ -155,8 +155,9 @@ static void netfs_read_cache_to_pagecache(struct netfs_io_request *rreq,
 					  netfs_cache_read_terminated, subreq);
 }
 
-static void netfs_issue_read(struct netfs_io_request *rreq,
-			     struct netfs_io_subrequest *subreq)
+static void netfs_queue_read(struct netfs_io_request *rreq,
+			     struct netfs_io_subrequest *subreq,
+			     bool last_subreq)
 {
 	struct netfs_io_stream *stream = &rreq->io_streams[0];
 
@@ -177,8 +178,17 @@ static void netfs_issue_read(struct netfs_io_request *rreq,
 		}
 	}
 
+	if (last_subreq) {
+		smp_wmb(); /* Write lists before ALL_QUEUED. */
+		set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
+	}
+
 	spin_unlock(&rreq->lock);
+}
 
+static void netfs_issue_read(struct netfs_io_request *rreq,
+			     struct netfs_io_subrequest *subreq)
+{
 	switch (subreq->source) {
 	case NETFS_DOWNLOAD_FROM_SERVER:
 		rreq->netfs_ops->issue_read(subreq);
@@ -293,11 +303,8 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)
 		}
 		size -= slice;
 		start += slice;
-		if (size <= 0) {
-			smp_wmb(); /* Write lists before ALL_QUEUED. */
-			set_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);
-		}
 
+		netfs_queue_read(rreq, subreq, size <= 0);
 		netfs_issue_read(rreq, subreq);
 		cond_resched();
 	} while (size > 0);
