
Commit c0707c9

josefbacik authored and kdave committed
btrfs: push the extent lock into btrfs_run_delalloc_range
We want to limit the scope of the extent lock to be around operations that can change in flight. Currently we hold the extent lock through the entire writepage operation, which isn't really necessary.

We want to protect against anybody updating DELALLOC underneath us. In find_lock_delalloc_range we must lock the range in order to validate the contents of our io_tree. However, once we've done that we're safe to unlock the range and continue, as we already hold the page lock for the range. We are protected from all operations at this point:

* mmap() - we're holding the page lock, thus are protected.
* buffered writes - again, we're protected because we take the page lock for the first and last page in our range for buffered writes, so we won't create new delalloc ranges in this area.
* direct IO - we invalidate pagecache before attempting to write to a new area, which requires the page lock, so again we are protected once we're holding the page lock on this range.

Additionally, this behavior already exists for the compressed case: we unlock the range as soon as we start to process the async extents, and re-lock it during compression. So this is completely safe, and it makes the locking more consistent.

Make this simple by just pushing the extent lock into btrfs_run_delalloc_range; from there, follow-up patches will push the lock further down into its users.

Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
Signed-off-by: Josef Bacik <josef@toxicpanda.com>
Signed-off-by: David Sterba <dsterba@suse.com>
1 parent 7034674 commit c0707c9
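
The pattern the commit message describes - take the extent lock only long enough to validate shared state, then drop it and let a longer-held lock (the page lock) cover the rest of the operation - can be sketched as a toy userspace analogy. This is not btrfs code; the mutexes, the range_is_delalloc flag, and the do_writeback() placeholder are invented for illustration only:

	#include <pthread.h>
	#include <stdbool.h>

	static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;   /* stands in for the page lock */
	static pthread_mutex_t extent_lock = PTHREAD_MUTEX_INITIALIZER; /* stands in for the extent io_tree lock */
	static bool range_is_delalloc = true;                           /* stands in for the EXTENT_DELALLOC bit */

	/*
	 * Invariant (mirroring the commit message): anything that would create
	 * new delalloc in this range must hold page_lock first.
	 */
	static void writeback_range(void)
	{
		pthread_mutex_lock(&page_lock);

		/* Hold the extent lock only long enough to validate the state ... */
		pthread_mutex_lock(&extent_lock);
		bool still_delalloc = range_is_delalloc;
		pthread_mutex_unlock(&extent_lock);

		if (still_delalloc) {
			/*
			 * ... then do the long-running work under page_lock alone;
			 * nobody can change the delalloc state without it.
			 */
			/* do_writeback(); */
		}

		pthread_mutex_unlock(&page_lock);
	}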

File tree

2 files changed, +7 -3 lines changed


fs/btrfs/extent_io.c

Lines changed: 2 additions & 3 deletions
@@ -396,15 +396,14 @@ noinline_for_stack bool find_lock_delalloc_range(struct inode *inode,
 	/* then test to make sure it is all still delalloc */
 	ret = test_range_bit(tree, delalloc_start, delalloc_end,
 			     EXTENT_DELALLOC, cached_state);
+
+	unlock_extent(tree, delalloc_start, delalloc_end, &cached_state);
 	if (!ret) {
-		unlock_extent(tree, delalloc_start, delalloc_end,
-			      &cached_state);
 		__unlock_for_delalloc(inode, locked_page,
 				      delalloc_start, delalloc_end);
 		cond_resched();
 		goto again;
 	}
-	free_extent_state(cached_state);
 	*start = delalloc_start;
 	*end = delalloc_end;
 out_failed:
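
The net effect in find_lock_delalloc_range() is that the extent range is now unlocked unconditionally right after the DELALLOC check, instead of only on the retry path. A simplified, annotated view of the resulting code (the comments are added here for illustration; the hunk above is the authoritative change):

	/* Validate that the whole range is still flagged as delalloc. */
	ret = test_range_bit(tree, delalloc_start, delalloc_end,
			     EXTENT_DELALLOC, cached_state);

	/*
	 * The io_tree contents have been validated and the pages in the
	 * range are locked, so the extent lock can be dropped here in both
	 * the success and the retry case. Passing &cached_state lets
	 * unlock_extent() also release the cached extent state, which is
	 * presumably why the separate free_extent_state() call went away.
	 */
	unlock_extent(tree, delalloc_start, delalloc_end, &cached_state);
	if (!ret) {
		/* Raced with a delalloc change: drop the page locks and retry. */
		__unlock_for_delalloc(inode, locked_page,
				      delalloc_start, delalloc_end);
		cond_resched();
		goto again;
	}
	*start = delalloc_start;
	*end = delalloc_end;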

fs/btrfs/inode.c

Lines changed: 5 additions & 0 deletions
@@ -2249,6 +2249,11 @@ int btrfs_run_delalloc_range(struct btrfs_inode *inode, struct page *locked_page
 	const bool zoned = btrfs_is_zoned(inode->root->fs_info);
 	int ret;
 
+	/*
+	 * We're unlocked by the different fill functions below.
+	 */
+	lock_extent(&inode->io_tree, start, end, NULL);
+
 	/*
 	 * The range must cover part of the @locked_page, or a return of 1
 	 * can confuse the caller.
