
Commit 798dec3

x86-64: mm: clarify the 'positive addresses' user address rules
Dave Hansen found the "(long) addr >= 0" code in the x86-64 access_ok checks somewhat confusing, and suggested using a helper to clarify what the code is doing. So this does exactly that: it clarifies what the sign bit check is all about, by adding a helper macro that makes it clear what it is testing.

This also adds some explicit comments about how, even with LAM enabled, any address with the sign bit set will still GP-fault in the non-canonical region just above the sign bit. That is what allows us to do the user address checks with just the sign bit, and furthermore be a bit cavalier about accesses that might be done with an additional offset even past that point.

(And yes, this talks about 'positive' even though zero is also a valid user address and so technically we should call them 'non-negative'. But I don't think using 'non-negative' ends up being more understandable.)

Suggested-by: Dave Hansen <dave.hansen@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent commit: 1dbc0a9
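To make the sign-bit rule concrete, here is a minimal user-space sketch (not kernel code; the helper name and the sample addresses are illustrative assumptions, and a 64-bit long is assumed). It shows that once the address space is split into a kernel half and a user half, user pointers cast to a signed long are non-negative while kernel pointers are negative:

#include <stdio.h>

/* Same test as the kernel's new valid_user_address() macro. */
static int is_user_half(unsigned long addr)
{
	return (long)addr >= 0;
}

int main(void)
{
	unsigned long user_ptr   = 0x00007f0000001000UL;	/* typical user mapping (example) */
	unsigned long kernel_ptr = 0xffff888000000000UL;	/* typical kernel direct map (example) */
	unsigned long boundary   = 0x8000000000000000UL;	/* first address with the sign bit set */

	printf("user   -> %d\n", is_user_half(user_ptr));	/* prints 1 */
	printf("kernel -> %d\n", is_user_half(kernel_ptr));	/* prints 0 */
	printf("bound  -> %d\n", is_user_half(boundary));	/* prints 0 */
	return 0;
}

The boundary address fails the test, and as the commit message notes it is also non-canonical, so a stray access just past the user half would take a GP fault rather than silently reach kernel memory.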

2 files changed, 33 insertions(+), 15 deletions(-)


arch/x86/include/asm/uaccess_64.h

Lines changed: 31 additions & 13 deletions
@@ -50,27 +50,45 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm,
 #endif
 
 /*
- * On x86-64, we may have tag bits in the user pointer. Rather than
- * mask them off, just change the rules for __access_ok().
+ * The virtual address space space is logically divided into a kernel
+ * half and a user half. When cast to a signed type, user pointers
+ * are positive and kernel pointers are negative.
+ */
+#define valid_user_address(x) ((long)(x) >= 0)
+
+/*
+ * User pointers can have tag bits on x86-64. This scheme tolerates
+ * arbitrary values in those bits rather then masking them off.
+ *
+ * Enforce two rules:
+ * 1. 'ptr' must be in the user half of the address space
+ * 2. 'ptr+size' must not overflow into kernel addresses
+ *
+ * Note that addresses around the sign change are not valid addresses,
+ * and will GP-fault even with LAM enabled if the sign bit is set (see
+ * "CR3.LAM_SUP" that can narrow the canonicality check if we ever
+ * enable it, but not remove it entirely).
+ *
+ * So the "overflow into kernel addresses" does not imply some sudden
+ * exact boundary at the sign bit, and we can allow a lot of slop on the
+ * size check.
  *
- * Make the rule be that 'ptr+size' must not overflow, and must not
- * have the high bit set. Compilers generally understand about
- * unsigned overflow and the CF bit and generate reasonable code for
- * this. Although it looks like the combination confuses at least
- * clang (and instead of just doing an "add" followed by a test of
- * SF and CF, you'll see that unnecessary comparison).
+ * In fact, we could probably remove the size check entirely, since
+ * any kernel accesses will be in increasing address order starting
+ * at 'ptr', and even if the end might be in kernel space, we'll
+ * hit the GP faults for non-canonical accesses before we ever get
+ * there.
  *
- * For the common case of small sizes that can be checked at compile
- * time, don't even bother with the addition, and just check that the
- * base pointer is ok.
+ * That's a separate optimization, for now just handle the small
+ * constant case.
  */
 static inline bool __access_ok(const void __user *ptr, unsigned long size)
 {
 	if (__builtin_constant_p(size <= PAGE_SIZE) && size <= PAGE_SIZE) {
-		return (long)ptr >= 0;
+		return valid_user_address(ptr);
 	} else {
 		unsigned long sum = size + (unsigned long)ptr;
-		return (long) sum >= 0 && sum >= (unsigned long)ptr;
+		return valid_user_address(sum) && sum >= (unsigned long)ptr;
 	}
 }
 #define __access_ok __access_ok
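As a rough illustration of the two rules in the comment above, here is a standalone model of the non-constant-size branch of __access_ok() (an assumption for this page, not the kernel header itself; the PAGE_SIZE fast path and the __user annotation are left out). It shows how the single "the sum has no sign bit and did not wrap" test covers both the starting pointer and the end of the range:

#include <assert.h>

static int model_valid_user_address(unsigned long x)
{
	return (long)x >= 0;
}

static int model_access_ok(unsigned long ptr, unsigned long size)
{
	unsigned long sum = size + ptr;

	/*
	 * If 'ptr' had the sign bit set, 'sum' either keeps it (rejected by
	 * the first test) or wraps around (rejected by the second), so both
	 * rules fall out of this one check.
	 */
	return model_valid_user_address(sum) && sum >= ptr;
}

int main(void)
{
	assert(model_access_ok(0x0000000000001000UL, 0x1000));		/* small user range */
	assert(!model_access_ok(0xffff888000000000UL, 8));		/* kernel pointer */
	assert(!model_access_ok(0x7fffffffffffff00UL, 0x200));		/* end crosses the sign bit */
	assert(!model_access_ok(0x0000000000000100UL, ~0UL));		/* 'ptr+size' wraps around */
	return 0;
}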

arch/x86/mm/extable.c

Lines changed: 2 additions & 2 deletions
@@ -143,12 +143,12 @@ static bool gp_fault_address_ok(unsigned long fault_address)
 {
 #ifdef CONFIG_X86_64
 	/* Is it in the "user space" part of the non-canonical space? */
-	if ((long) fault_address >= 0)
+	if (valid_user_address(fault_address))
 		return true;
 
 	/* .. or just above it? */
 	fault_address -= PAGE_SIZE;
-	if ((long) fault_address >= 0)
+	if (valid_user_address(fault_address))
 		return true;
 #endif
 	return false;
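And a matching user-space model of the fixup check above (again an assumption for illustration, with a hard-coded 4096 standing in for PAGE_SIZE): a #GP fault address is treated as plausibly caused by a user access if it lies in the user half, or at most one page above it, which mirrors the slop that __access_ok() tolerates:

#include <assert.h>

#define MODEL_PAGE_SIZE 4096UL	/* stand-in for the kernel's PAGE_SIZE */

static int model_gp_fault_address_ok(unsigned long fault_address)
{
	/* Is it in the "user space" part of the non-canonical space? */
	if ((long)fault_address >= 0)
		return 1;

	/* .. or just above it? */
	fault_address -= MODEL_PAGE_SIZE;
	if ((long)fault_address >= 0)
		return 1;

	return 0;
}

int main(void)
{
	assert(model_gp_fault_address_ok(0x00007f0000000000UL));	/* user half */
	assert(model_gp_fault_address_ok(0x8000000000000fffUL));	/* just above the sign bit */
	assert(!model_gp_fault_address_ok(0xffff888000000000UL));	/* kernel address */
	return 0;
}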
