
Commit 2439a5e

Merge tag 'x86_bugs_for_v6.11_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 cpu mitigation updates from Borislav Petkov:

 - Add a spectre_bhi=vmexit mitigation option aimed at cloud environments

 - Remove duplicated Spectre cmdline option documentation

 - Add separate macro definitions for syscall handlers which do not
   return in order to address objtool warnings

* tag 'x86_bugs_for_v6.11_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/bugs: Add 'spectre_bhi=vmexit' cmdline option
  x86/bugs: Remove duplicate Spectre cmdline option descriptions
  x86/syscall: Mark exit[_group] syscall handlers __noreturn
2 parents f998678 + 42c141f commit 2439a5e
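
Note on the last item above: __noreturn tells the compiler, and static checkers such as objtool, that a function never returns to its caller, so no code path after a call to it needs to be validated. A minimal, standalone C illustration of the attribute (hypothetical names, not kernel code):

    /* Illustration only: a handler that terminates and never returns.
     * Marking it noreturn lets the compiler and tools like objtool treat
     * everything after a call to it as unreachable. */
    #include <stdlib.h>

    __attribute__((__noreturn__))
    static long fake_exit_handler(long code)    /* hypothetical name */
    {
            exit((int)code);                    /* never returns */
    }

    int main(void)
    {
            fake_exit_handler(0);   /* nothing after this call is reachable */
    }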

File tree

12 files changed, +86 -109 lines changed

Documentation/admin-guide/hw-vuln/spectre.rst

Lines changed: 10 additions & 76 deletions
@@ -592,85 +592,19 @@ Spectre variant 2
 Mitigation control on the kernel command line
 ---------------------------------------------

-Spectre variant 2 mitigation can be disabled or force enabled at the
-kernel command line.
+In general the kernel selects reasonable default mitigations for the
+current CPU.

-        nospectre_v1
+Spectre default mitigations can be disabled or changed at the kernel
+command line with the following options:

-                [X86,PPC] Disable mitigations for Spectre Variant 1
-                (bounds check bypass). With this option data leaks are
-                possible in the system.
+   - nospectre_v1
+   - nospectre_v2
+   - spectre_v2={option}
+   - spectre_v2_user={option}
+   - spectre_bhi={option}

-        nospectre_v2
-
-                [X86] Disable all mitigations for the Spectre variant 2
-                (indirect branch prediction) vulnerability. System may
-                allow data leaks with this option, which is equivalent
-                to spectre_v2=off.
-
-
-        spectre_v2=
-
-                [X86] Control mitigation of Spectre variant 2
-                (indirect branch speculation) vulnerability.
-                The default operation protects the kernel from
-                user space attacks.
-
-                on
-                        unconditionally enable, implies
-                        spectre_v2_user=on
-                off
-                        unconditionally disable, implies
-                        spectre_v2_user=off
-                auto
-                        kernel detects whether your CPU model is
-                        vulnerable
-
-                Selecting 'on' will, and 'auto' may, choose a
-                mitigation method at run time according to the
-                CPU, the available microcode, the setting of the
-                CONFIG_MITIGATION_RETPOLINE configuration option,
-                and the compiler with which the kernel was built.
-
-                Selecting 'on' will also enable the mitigation
-                against user space to user space task attacks.
-
-                Selecting 'off' will disable both the kernel and
-                the user space protections.
-
-                Specific mitigations can also be selected manually:
-
-                retpoline               auto pick between generic,lfence
-                retpoline,generic       Retpolines
-                retpoline,lfence        LFENCE; indirect branch
-                retpoline,amd           alias for retpoline,lfence
-                eibrs                   Enhanced/Auto IBRS
-                eibrs,retpoline         Enhanced/Auto IBRS + Retpolines
-                eibrs,lfence            Enhanced/Auto IBRS + LFENCE
-                ibrs                    use IBRS to protect kernel
-
-                Not specifying this option is equivalent to
-                spectre_v2=auto.
-
-                In general the kernel by default selects
-                reasonable mitigations for the current CPU. To
-                disable Spectre variant 2 mitigations, boot with
-                spectre_v2=off. Spectre variant 1 mitigations
-                cannot be disabled.
-
-        spectre_bhi=
-
-                [X86] Control mitigation of Branch History Injection
-                (BHI) vulnerability. This setting affects the deployment
-                of the HW BHI control and the SW BHB clearing sequence.
-
-                on
-                        (default) Enable the HW or SW mitigation as
-                        needed.
-                off
-                        Disable the mitigation.
-
-For spectre_v2_user see Documentation/admin-guide/kernel-parameters.txt
+For more details on the available options, refer to Documentation/admin-guide/kernel-parameters.txt

 Mitigation selection guide
 --------------------------

Documentation/admin-guide/kernel-parameters.txt

Lines changed: 9 additions & 3 deletions
@@ -6125,9 +6125,15 @@
                         deployment of the HW BHI control and the SW BHB
                         clearing sequence.

-                        on     - (default) Enable the HW or SW mitigation
-                                 as needed.
-                        off    - Disable the mitigation.
+                        on     - (default) Enable the HW or SW mitigation as
+                                 needed. This protects the kernel from
+                                 both syscalls and VMs.
+                        vmexit - On systems which don't have the HW mitigation
+                                 available, enable the SW mitigation on vmexit
+                                 ONLY. On such systems, the host kernel is
+                                 protected from VM-originated BHI attacks, but
+                                 may still be vulnerable to syscall attacks.
+                        off    - Disable the mitigation.

         spectre_v2=     [X86,EARLY] Control mitigation of Spectre variant 2
                         (indirect branch speculation) vulnerability.
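
To see which of these choices took effect on a running system, the spectre_v2 entry under /sys/devices/system/cpu/vulnerabilities reports the selected mitigation (on kernels with the BHI work it also carries a "BHI: ..." field). A minimal sketch that just prints that file, assuming the standard sysfs path:

    /* Sketch: print the reported Spectre v2 / BHI mitigation status.
     * Assumes the usual sysfs path; adjust if your system differs. */
    #include <stdio.h>

    int main(void)
    {
            char line[256];
            FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/spectre_v2", "r");

            if (!f) {
                    perror("fopen");
                    return 1;
            }
            if (fgets(line, sizeof(line), f))
                    fputs(line, stdout);    /* e.g. "Mitigation: ...; BHI: ..." */
            fclose(f);
            return 0;
    }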

arch/x86/entry/syscall_32.c

Lines changed: 6 additions & 4 deletions
@@ -14,9 +14,12 @@
 #endif

 #define __SYSCALL(nr, sym) extern long __ia32_##sym(const struct pt_regs *);
-
+#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __ia32_##sym(const struct pt_regs *);
 #include <asm/syscalls_32.h>
-#undef __SYSCALL
+#undef  __SYSCALL
+
+#undef __SYSCALL_NORETURN
+#define __SYSCALL_NORETURN __SYSCALL

 /*
  * The sys_call_table[] is no longer used for system calls, but
@@ -28,11 +31,10 @@
 const sys_call_ptr_t sys_call_table[] = {
 #include <asm/syscalls_32.h>
 };
-#undef __SYSCALL
+#undef  __SYSCALL
 #endif

 #define __SYSCALL(nr, sym) case nr: return __ia32_##sym(regs);
-
 long ia32_sys_call(const struct pt_regs *regs, unsigned int nr)
 {
         switch (nr) {
arch/x86/entry/syscall_64.c

Lines changed: 6 additions & 3 deletions
@@ -8,8 +8,12 @@
 #include <asm/syscall.h>

 #define __SYSCALL(nr, sym) extern long __x64_##sym(const struct pt_regs *);
+#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __x64_##sym(const struct pt_regs *);
 #include <asm/syscalls_64.h>
-#undef __SYSCALL
+#undef  __SYSCALL
+
+#undef __SYSCALL_NORETURN
+#define __SYSCALL_NORETURN __SYSCALL

 /*
  * The sys_call_table[] is no longer used for system calls, but
@@ -20,10 +24,9 @@
 const sys_call_ptr_t sys_call_table[] = {
 #include <asm/syscalls_64.h>
 };
-#undef __SYSCALL
+#undef  __SYSCALL

 #define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
-
 long x64_sys_call(const struct pt_regs *regs, unsigned int nr)
 {
         switch (nr) {

arch/x86/entry/syscall_x32.c

Lines changed: 5 additions & 2 deletions
@@ -8,11 +8,14 @@
 #include <asm/syscall.h>

 #define __SYSCALL(nr, sym) extern long __x64_##sym(const struct pt_regs *);
+#define __SYSCALL_NORETURN(nr, sym) extern long __noreturn __x64_##sym(const struct pt_regs *);
 #include <asm/syscalls_x32.h>
-#undef __SYSCALL
+#undef  __SYSCALL

-#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
+#undef __SYSCALL_NORETURN
+#define __SYSCALL_NORETURN __SYSCALL

+#define __SYSCALL(nr, sym) case nr: return __x64_##sym(regs);
 long x32_sys_call(const struct pt_regs *regs, unsigned int nr)
 {
         switch (nr) {

arch/x86/entry/syscalls/syscall_32.tbl

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@
 # 32-bit system call numbers and entry vectors
 #
 # The format is:
-# <number> <abi> <name> <entry point> <compat entry point>
+# <number> <abi> <name> <entry point> [<compat entry point> [noreturn]]
 #
 # The __ia32_sys and __ia32_compat_sys stubs are created on-the-fly for
 # sys_*() system calls and compat_sys_*() compat system calls if
@@ -13,7 +13,7 @@
 # The abi is always "i386" for this file.
 #
 0       i386    restart_syscall         sys_restart_syscall
-1       i386    exit                    sys_exit
+1       i386    exit                    sys_exit                        -       noreturn
 2       i386    fork                    sys_fork
 3       i386    read                    sys_read
 4       i386    write                   sys_write
@@ -264,7 +264,7 @@
 249     i386    io_cancel               sys_io_cancel
 250     i386    fadvise64               sys_ia32_fadvise64
 # 251 is available for reuse (was briefly sys_set_zone_reclaim)
-252     i386    exit_group              sys_exit_group
+252     i386    exit_group              sys_exit_group                  -       noreturn
 253     i386    lookup_dcookie
 254     i386    epoll_create            sys_epoll_create
 255     i386    epoll_ctl               sys_epoll_ctl

arch/x86/entry/syscalls/syscall_64.tbl

Lines changed: 3 additions & 3 deletions
@@ -3,7 +3,7 @@
 # 64-bit system call numbers and entry vectors
 #
 # The format is:
-# <number> <abi> <name> <entry point>
+# <number> <abi> <name> <entry point> [<compat entry point> [noreturn]]
 #
 # The __x64_sys_*() stubs are created on-the-fly for sys_*() system calls
 #
@@ -69,7 +69,7 @@
 57      common  fork                    sys_fork
 58      common  vfork                   sys_vfork
 59      64      execve                  sys_execve
-60      common  exit                    sys_exit
+60      common  exit                    sys_exit                        -       noreturn
 61      common  wait4                   sys_wait4
 62      common  kill                    sys_kill
 63      common  uname                   sys_newuname
@@ -240,7 +240,7 @@
 228     common  clock_gettime           sys_clock_gettime
 229     common  clock_getres            sys_clock_getres
 230     common  clock_nanosleep         sys_clock_nanosleep
-231     common  exit_group              sys_exit_group
+231     common  exit_group              sys_exit_group                  -       noreturn
 232     common  epoll_wait              sys_epoll_wait
 233     common  epoll_ctl               sys_epoll_ctl
 234     common  tgkill                  sys_tgkill
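
The new trailing "noreturn" qualifier is consumed by the scripts that turn these tables into the asm/syscalls_*.h headers included by the syscall_*.c files above; entries carrying it are presumably emitted through __SYSCALL_NORETURN instead of __SYSCALL. A hedged sketch of what the relevant generated lines would then look like (the exact generated output is an assumption, not shown in this commit):

    /* Assumed shape of a few generated entries in asm/syscalls_64.h: */
    __SYSCALL(59, sys_execve)
    __SYSCALL_NORETURN(60, sys_exit)        /* marked noreturn in the table */
    __SYSCALL(61, sys_wait4)
    __SYSCALL_NORETURN(231, sys_exit_group) /* marked noreturn in the table */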

arch/x86/kernel/cpu/bugs.c

Lines changed: 11 additions & 5 deletions
@@ -1625,6 +1625,7 @@ static bool __init spec_ctrl_bhi_dis(void)
 enum bhi_mitigations {
         BHI_MITIGATION_OFF,
         BHI_MITIGATION_ON,
+        BHI_MITIGATION_VMEXIT_ONLY,
 };

 static enum bhi_mitigations bhi_mitigation __ro_after_init =
@@ -1639,6 +1640,8 @@ static int __init spectre_bhi_parse_cmdline(char *str)
                 bhi_mitigation = BHI_MITIGATION_OFF;
         else if (!strcmp(str, "on"))
                 bhi_mitigation = BHI_MITIGATION_ON;
+        else if (!strcmp(str, "vmexit"))
+                bhi_mitigation = BHI_MITIGATION_VMEXIT_ONLY;
         else
                 pr_err("Ignoring unknown spectre_bhi option (%s)", str);

@@ -1659,19 +1662,22 @@ static void __init bhi_select_mitigation(void)
                 return;
         }

+        /* Mitigate in hardware if supported */
         if (spec_ctrl_bhi_dis())
                 return;

         if (!IS_ENABLED(CONFIG_X86_64))
                 return;

-        /* Mitigate KVM by default */
-        setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
-        pr_info("Spectre BHI mitigation: SW BHB clearing on vm exit\n");
+        if (bhi_mitigation == BHI_MITIGATION_VMEXIT_ONLY) {
+                pr_info("Spectre BHI mitigation: SW BHB clearing on VM exit only\n");
+                setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
+                return;
+        }

-        /* Mitigate syscalls when the mitigation is forced =on */
+        pr_info("Spectre BHI mitigation: SW BHB clearing on syscall and VM exit\n");
         setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP);
-        pr_info("Spectre BHI mitigation: SW BHB clearing on syscall\n");
+        setup_force_cpu_cap(X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT);
 }

 static void __init spectre_v2_select_mitigation(void)
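
Ignoring the hardware BHI control and the early exits, the net effect of the selection above is: "vmexit" forces only the VM-exit BHB-clearing capability, while "on" forces it for both syscalls and VM exits. A condensed restatement of that choice (illustration only, with a hypothetical helper name; not the kernel function itself):

    /* Condensed restatement of the software-mitigation choice above
     * (illustration only; the hardware BHI_DIS_S path and the
     * !CONFIG_X86_64 early return are ignored). */
    enum bhi_mitigations {
            BHI_MITIGATION_OFF,
            BHI_MITIGATION_ON,
            BHI_MITIGATION_VMEXIT_ONLY,
    };

    struct bhi_sw_caps {
            int clear_bhb_on_syscall;   /* X86_FEATURE_CLEAR_BHB_LOOP */
            int clear_bhb_on_vmexit;    /* X86_FEATURE_CLEAR_BHB_LOOP_ON_VMEXIT */
    };

    struct bhi_sw_caps bhi_pick_sw_caps(enum bhi_mitigations m)
    {
            struct bhi_sw_caps caps = { 0, 0 };

            if (m == BHI_MITIGATION_OFF)
                    return caps;                    /* nothing forced */

            caps.clear_bhb_on_vmexit = 1;           /* "on" and "vmexit" both set this */
            if (m == BHI_MITIGATION_ON)
                    caps.clear_bhb_on_syscall = 1;  /* only "on" also covers syscalls */

            return caps;
    }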

arch/x86/um/sys_call_table_32.c

Lines changed: 6 additions & 4 deletions
@@ -9,6 +9,10 @@
 #include <linux/cache.h>
 #include <asm/syscall.h>

+extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long,
+                                      unsigned long, unsigned long,
+                                      unsigned long, unsigned long);
+
 /*
  * Below you can see, in terms of #define's, the differences between the x86-64
  * and the UML syscall table.
@@ -22,15 +26,13 @@
 #define sys_vm86 sys_ni_syscall

 #define __SYSCALL_WITH_COMPAT(nr, native, compat) __SYSCALL(nr, native)
+#define __SYSCALL_NORETURN __SYSCALL

 #define __SYSCALL(nr, sym) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
 #include <asm/syscalls_32.h>
+#undef __SYSCALL

-#undef __SYSCALL
 #define __SYSCALL(nr, sym) sym,
-
-extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
-
 const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
 #include <asm/syscalls_32.h>
 };

arch/x86/um/sys_call_table_64.c

Lines changed: 7 additions & 4 deletions
@@ -9,6 +9,10 @@
 #include <linux/cache.h>
 #include <asm/syscall.h>

+extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long,
+                                      unsigned long, unsigned long,
+                                      unsigned long, unsigned long);
+
 /*
  * Below you can see, in terms of #define's, the differences between the x86-64
  * and the UML syscall table.
@@ -18,14 +22,13 @@
 #define sys_iopl sys_ni_syscall
 #define sys_ioperm sys_ni_syscall

+#define __SYSCALL_NORETURN __SYSCALL
+
 #define __SYSCALL(nr, sym) extern asmlinkage long sym(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
 #include <asm/syscalls_64.h>
+#undef __SYSCALL

-#undef __SYSCALL
 #define __SYSCALL(nr, sym) sym,
-
-extern asmlinkage long sys_ni_syscall(unsigned long, unsigned long, unsigned long, unsigned long, unsigned long, unsigned long);
-
 const sys_call_ptr_t sys_call_table[] ____cacheline_aligned = {
 #include <asm/syscalls_64.h>
 };
