
Commit 47cf96f

Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
Pull arm64 updates from Will Deacon:
 "The headline feature is the re-enablement of support for Arm's
  Scalable Matrix Extension (SME) thanks to a bumper crop of fixes
  from Mark Rutland. If matrices aren't your thing, then Ryan's
  page-table optimisation work is much more interesting.

  Summary:

  ACPI, EFI and PSCI:

   - Decouple Arm's "Software Delegated Exception Interface" (SDEI)
     support from the ACPI GHES code so that it can be used by
     platforms booted with device-tree

   - Remove unnecessary per-CPU tracking of the FPSIMD state across EFI
     runtime calls

   - Fix a node refcount imbalance in the PSCI device-tree code

  CPU Features:

   - Ensure register sanitisation is applied to fields in ID_AA64MMFR4

   - Expose AIDR_EL1 to userspace via sysfs, primarily so that KVM
     guests can reliably query the underlying CPU types from the VMM

   - Re-enabling of SME support (CONFIG_ARM64_SME) as a result of fixes
     to our context-switching, signal handling and ptrace code

  Entry code:

   - Hook up TIF_NEED_RESCHED_LAZY so that CONFIG_PREEMPT_LAZY can be
     selected

  Memory management:

   - Prevent BSS exports from being used by the early PI code

   - Propagate level and stride information to the low-level TLB
     invalidation routines when operating on hugetlb entries

   - Use the page-table contiguous hint for vmap() mappings with
     VM_ALLOW_HUGE_VMAP where possible

   - Optimise vmalloc()/vmap() page-table updates to use "lazy MMU
     mode" and hook this up on arm64 so that the trailing DSB (used to
     publish the updates to the hardware walker) can be deferred until
     the end of the mapping operation

   - Extend mmap() randomisation for 52-bit virtual addresses (on par
     with 48-bit addressing) and remove limited support for
     randomisation of the linear map

  Perf and PMUs:

   - Add support for probing the CMN-S3 driver using ACPI

   - Minor driver fixes to the CMN, Arm-NI and amlogic PMU drivers

  Selftests:

   - Fix FPSIMD and SME tests to align with the freshly re-enabled SME
     support

   - Fix default setting of the OUTPUT variable so that tests are
     installed in the right location

  vDSO:

   - Replace raw counter access from inline assembly code with a call
     to the __arch_counter_get_cntvct() helper function

  Miscellaneous:

   - Add some missing header inclusions to the CCA headers

   - Rework rendering of /proc/cpuinfo to follow the x86 approach and
     avoid repeated buffer expansion (the user-visible format remains
     identical)

   - Remove redundant selection of CONFIG_CRC32

   - Extend early error message when failing to map the device-tree
     blob"

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (83 commits)
  arm64: cputype: Add cputype definition for HIP12
  arm64: el2_setup.h: Make __init_el2_fgt labels consistent, again
  perf/arm-cmn: Add CMN S3 ACPI binding
  arm64/boot: Disallow BSS exports to startup code
  arm64/boot: Move global CPU override variables out of BSS
  arm64/boot: Move init_pgdir[] and init_idmap_pgdir[] into __pi_ namespace
  perf/arm-cmn: Initialise cmn->cpu earlier
  kselftest/arm64: Set default OUTPUT path when undefined
  arm64: Update comment regarding values in __boot_cpu_mode
  arm64: mm: Drop redundant check in pmd_trans_huge()
  arm64/mm: Re-organise setting up FEAT_S1PIE registers PIRE0_EL1 and PIR_EL1
  arm64/mm: Permit lazy_mmu_mode to be nested
  arm64/mm: Disable barrier batching in interrupt contexts
  arm64/cpuinfo: only show one cpu's info in c_show()
  arm64/mm: Batch barriers when updating kernel mappings
  mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
  arm64/mm: Support huge pte-mapped pages in vmap
  mm/vmalloc: Gracefully unmap huge ptes
  mm/vmalloc: Warn on improper use of vunmap_range()
  arm64/mm: Hoist barriers out of set_ptes_anysz() loop
  ...
2 parents bbff27b + 217e3cb


55 files changed, 1060 insertions(+), 891 deletions(-)

Documentation/ABI/testing/sysfs-devices-system-cpu

Lines changed: 1 addition & 0 deletions
@@ -544,6 +544,7 @@ What:		/sys/devices/system/cpu/cpuX/regs/
 		/sys/devices/system/cpu/cpuX/regs/identification/
 		/sys/devices/system/cpu/cpuX/regs/identification/midr_el1
 		/sys/devices/system/cpu/cpuX/regs/identification/revidr_el1
+		/sys/devices/system/cpu/cpuX/regs/identification/aidr_el1
 		/sys/devices/system/cpu/cpuX/regs/identification/smidr_el1
 Date:		June 2016
 Contact:	Linux ARM Kernel Mailing list <linux-arm-kernel@lists.infradead.org>

Documentation/arch/arm64/cpu-feature-registers.rst

Lines changed: 7 additions & 6 deletions
@@ -72,14 +72,15 @@ there are some issues with their usage.
    process could be migrated to another CPU by the time it uses the
    register value, unless the CPU affinity is set. Hence, there is no
    guarantee that the value reflects the processor that it is
-   currently executing on. The REVIDR is not exposed due to this
-   constraint, as REVIDR makes sense only in conjunction with the
-   MIDR. Alternately, MIDR_EL1 and REVIDR_EL1 are exposed via sysfs
-   at::
+   currently executing on. REVIDR and AIDR are not exposed due to this
+   constraint, as these registers only make sense in conjunction with
+   the MIDR. Alternately, MIDR_EL1, REVIDR_EL1, and AIDR_EL1 are exposed
+   via sysfs at::
 
 	/sys/devices/system/cpu/cpu$ID/regs/identification/
-	      \- midr
-	      \- revidr
+	      \- midr_el1
+	      \- revidr_el1
+	      \- aidr_el1
 
 3. Implementation
 --------------------
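
As a usage note (not part of the patch): the new aidr_el1 file can be read like any other sysfs attribute. A minimal userspace sketch, assuming a kernel with this series applied and that cpu0 is online:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
        /* Path taken from the ABI entry added above. */
        const char *path =
                "/sys/devices/system/cpu/cpu0/regs/identification/aidr_el1";
        FILE *f = fopen(path, "r");
        unsigned long long aidr;

        if (!f) {
                perror("fopen");       /* pre-series kernel, or no such CPU */
                return EXIT_FAILURE;
        }
        /* The file contains a single hex value; %llx accepts the 0x prefix. */
        if (fscanf(f, "%llx", &aidr) != 1) {
                fclose(f);
                return EXIT_FAILURE;
        }
        fclose(f);
        printf("AIDR_EL1: 0x%016llx\n", aidr);
        return 0;
}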

Documentation/arch/arm64/sme.rst

Lines changed: 4 additions & 4 deletions
@@ -69,8 +69,8 @@ model features for SME is included in Appendix A.
   vectors from 0 to VL/8-1 stored in the same endianness invariant format as is
   used for SVE vectors.
 
-* On thread creation TPIDR2_EL0 is preserved unless CLONE_SETTLS is specified,
-  in which case it is set to 0.
+* On thread creation PSTATE.ZA and TPIDR2_EL0 are preserved unless CLONE_VM
+  is specified, in which case PSTATE.ZA is set to 0 and TPIDR2_EL0 is set to 0.
 
 2. Vector lengths
 ------------------
@@ -115,7 +115,7 @@ be zeroed.
 5. Signal handling
 -------------------
 
-* Signal handlers are invoked with streaming mode and ZA disabled.
+* Signal handlers are invoked with PSTATE.SM=0, PSTATE.ZA=0, and TPIDR2_EL0=0.
 
 * A new signal frame record TPIDR2_MAGIC is added formatted as a struct
   tpidr2_context to allow access to TPIDR2_EL0 from signal handlers.
@@ -241,7 +241,7 @@ prctl(PR_SME_SET_VL, unsigned long arg)
   length, or calling PR_SME_SET_VL with the PR_SME_SET_VL_ONEXEC flag,
   does not constitute a change to the vector length for this purpose.
 
-* Changing the vector length causes PSTATE.ZA and PSTATE.SM to be cleared.
+* Changing the vector length causes PSTATE.ZA to be cleared.
   Calling PR_SME_SET_VL with vl equal to the thread's current vector
   length, or calling PR_SME_SET_VL with the PR_SME_SET_VL_ONEXEC flag,
   does not constitute a change to the vector length for this purpose.
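
A hedged userspace sketch of the prctl() interface these rules describe. PR_SME_SET_VL and PR_SME_GET_VL are the real constants from include/uapi/linux/prctl.h; the fallback defines are only for building against older headers:

#include <stdio.h>
#include <sys/prctl.h>

#ifndef PR_SME_SET_VL
#define PR_SME_SET_VL   63      /* from include/uapi/linux/prctl.h */
#define PR_SME_GET_VL   64
#endif

int main(void)
{
        /* Request a 32-byte (256-bit) streaming vector length; the kernel
         * picks a supported VL. Per the documentation change above, a
         * *change* of vector length clears PSTATE.ZA (the old text also
         * claimed PSTATE.SM was cleared). */
        int ret = prctl(PR_SME_SET_VL, 32);

        if (ret < 0) {
                perror("PR_SME_SET_VL (no SME support?)");
                return 1;
        }
        /* The low 16 bits of the return value encode the resulting VL. */
        printf("SME vector length now %d bytes\n", ret & 0xffff);
        return 0;
}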

arch/arm64/Kconfig

Lines changed: 4 additions & 5 deletions
@@ -42,6 +42,7 @@ config ARM64
 	select ARCH_HAS_NMI_SAFE_THIS_CPU_OPS
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_NONLEAF_PMD_YOUNG if ARM64_HAFT
+	select ARCH_HAS_PREEMPT_LAZY
 	select ARCH_HAS_PTDUMP
 	select ARCH_HAS_PTE_DEVMAP
 	select ARCH_HAS_PTE_SPECIAL
@@ -134,7 +135,6 @@ config ARM64
 	select COMMON_CLK
 	select CPU_PM if (SUSPEND || CPU_IDLE)
 	select CPUMASK_OFFSTACK if NR_CPUS > 256
-	select CRC32
 	select DCACHE_WORD_ACCESS
 	select DYNAMIC_FTRACE if FUNCTION_TRACER
 	select DMA_BOUNCE_UNALIGNED_KMALLOC
@@ -333,9 +333,9 @@ config ARCH_MMAP_RND_BITS_MAX
 	default 24 if ARM64_VA_BITS=39
 	default 27 if ARM64_VA_BITS=42
 	default 30 if ARM64_VA_BITS=47
-	default 29 if ARM64_VA_BITS=48 && ARM64_64K_PAGES
-	default 31 if ARM64_VA_BITS=48 && ARM64_16K_PAGES
-	default 33 if ARM64_VA_BITS=48
+	default 29 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52) && ARM64_64K_PAGES
+	default 31 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52) && ARM64_16K_PAGES
+	default 33 if (ARM64_VA_BITS=48 || ARM64_VA_BITS=52)
 	default 14 if ARM64_64K_PAGES
 	default 16 if ARM64_16K_PAGES
 	default 18
@@ -2285,7 +2285,6 @@ config ARM64_SME
 	bool "ARM Scalable Matrix Extension support"
 	default y
 	depends on ARM64_SVE
-	depends on BROKEN
 	help
 	  The Scalable Matrix Extension (SME) is an extension to the AArch64
 	  execution state which utilises a substantial subset of the SVE
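
A side note on the ARCH_MMAP_RND_BITS_MAX defaults above: since the random offset is applied in pages, bits + PAGE_SHIFT comes to 45 in every 48/52-bit VA case, i.e. the same 32 TiB of mmap randomisation span regardless of page size. A tiny standalone check (illustration only, not kernel code):

#include <stdio.h>

int main(void)
{
        /* (rnd_bits, page_shift) pairs from the Kconfig defaults above
         * for ARM64_VA_BITS=48 or 52. */
        const struct { int bits, page_shift; const char *pages; } cfg[] = {
                { 33, 12, "4K"  },
                { 31, 14, "16K" },
                { 29, 16, "64K" },
        };

        for (int i = 0; i < 3; i++) {
                unsigned long long span =
                        1ULL << (cfg[i].bits + cfg[i].page_shift);
                printf("%3s pages: 2^%d page offsets -> %llu TiB of span\n",
                       cfg[i].pages, cfg[i].bits, span >> 40);
        }
        return 0;       /* prints 32 TiB for all three configurations */
}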

arch/arm64/include/asm/cpu.h

Lines changed: 1 addition & 0 deletions
@@ -44,6 +44,7 @@ struct cpuinfo_arm64 {
 	u64		reg_dczid;
 	u64		reg_midr;
 	u64		reg_revidr;
+	u64		reg_aidr;
 	u64		reg_gmid;
 	u64		reg_smidr;
 	u64		reg_mpamidr;

arch/arm64/include/asm/cputype.h

Lines changed: 2 additions & 0 deletions
@@ -134,6 +134,7 @@
 
 #define HISI_CPU_PART_TSV110		0xD01
 #define HISI_CPU_PART_HIP09		0xD02
+#define HISI_CPU_PART_HIP12		0xD06
 
 #define APPLE_CPU_PART_M1_ICESTORM	0x022
 #define APPLE_CPU_PART_M1_FIRESTORM	0x023
@@ -222,6 +223,7 @@
 #define MIDR_FUJITSU_A64FX MIDR_CPU_MODEL(ARM_CPU_IMP_FUJITSU, FUJITSU_CPU_PART_A64FX)
 #define MIDR_HISI_TSV110 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_TSV110)
 #define MIDR_HISI_HIP09 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP09)
+#define MIDR_HISI_HIP12 MIDR_CPU_MODEL(ARM_CPU_IMP_HISI, HISI_CPU_PART_HIP12)
 #define MIDR_APPLE_M1_ICESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM)
 #define MIDR_APPLE_M1_FIRESTORM MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_FIRESTORM)
 #define MIDR_APPLE_M1_ICESTORM_PRO MIDR_CPU_MODEL(ARM_CPU_IMP_APPLE, APPLE_CPU_PART_M1_ICESTORM_PRO)
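
Since this hunk only adds MIDR constants, here is a brief standalone illustration (not from the patch) of how a raw MIDR_EL1 value, e.g. one read from the sysfs file added earlier in this series, splits into the fields those macros encode:

#include <stdio.h>

/* MIDR_EL1 field layout (Arm ARM): Implementer[31:24], Variant[23:20],
 * Architecture[19:16], PartNum[15:4], Revision[3:0]. */
struct midr_fields {
        unsigned implementer, variant, architecture, partnum, revision;
};

static struct midr_fields midr_decode(unsigned long midr)
{
        return (struct midr_fields){
                .implementer  = (midr >> 24) & 0xff,
                .variant      = (midr >> 20) & 0xf,
                .architecture = (midr >> 16) & 0xf,
                .partnum      = (midr >> 4)  & 0xfff,
                .revision     = midr & 0xf,
        };
}

int main(void)
{
        /* Hypothetical sample value: 0x48 is the HiSilicon implementer
         * code (ARM_CPU_IMP_HISI) and 0xD06 the HIP12 part added above. */
        struct midr_fields f = midr_decode(0x481fd060UL);

        printf("implementer 0x%02x, part 0x%03x\n", f.implementer, f.partnum);
        return 0;
}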

arch/arm64/include/asm/el2_setup.h

Lines changed: 7 additions & 3 deletions
@@ -204,19 +204,21 @@
 	orr	x0, x0, #(1 << 62)
 
 .Lskip_spe_fgt_\@:
+
+.Lset_debug_fgt_\@:
 	msr_s	SYS_HDFGRTR_EL2, x0
 	msr_s	SYS_HDFGWTR_EL2, x0
 
 	mov	x0, xzr
 	mrs	x1, id_aa64pfr1_el1
 	ubfx	x1, x1, #ID_AA64PFR1_EL1_SME_SHIFT, #4
-	cbz	x1, .Lskip_debug_fgt_\@
+	cbz	x1, .Lskip_sme_fgt_\@
 
 	/* Disable nVHE traps of TPIDR2 and SMPRI */
 	orr	x0, x0, #HFGxTR_EL2_nSMPRI_EL1_MASK
 	orr	x0, x0, #HFGxTR_EL2_nTPIDR2_EL0_MASK
 
-.Lskip_debug_fgt_\@:
+.Lskip_sme_fgt_\@:
 	mrs_s	x1, SYS_ID_AA64MMFR3_EL1
 	ubfx	x1, x1, #ID_AA64MMFR3_EL1_S1PIE_SHIFT, #4
 	cbz	x1, .Lskip_pie_fgt_\@
@@ -237,12 +239,14 @@
 	/* GCS depends on PIE so we don't check it if PIE is absent */
 	mrs_s	x1, SYS_ID_AA64PFR1_EL1
 	ubfx	x1, x1, #ID_AA64PFR1_EL1_GCS_SHIFT, #4
-	cbz	x1, .Lset_fgt_\@
+	cbz	x1, .Lskip_gce_fgt_\@
 
 	/* Disable traps of access to GCS registers at EL0 and EL1 */
 	orr	x0, x0, #HFGxTR_EL2_nGCS_EL1_MASK
 	orr	x0, x0, #HFGxTR_EL2_nGCS_EL0_MASK
 
+.Lskip_gce_fgt_\@:
+
 .Lset_fgt_\@:
 	msr_s	SYS_HFGRTR_EL2, x0
 	msr_s	SYS_HFGWTR_EL2, x0

arch/arm64/include/asm/esr.h

Lines changed: 8 additions & 6 deletions
@@ -378,12 +378,14 @@
 /*
  * ISS values for SME traps
  */
-
-#define ESR_ELx_SME_ISS_SME_DISABLED	0
-#define ESR_ELx_SME_ISS_ILL		1
-#define ESR_ELx_SME_ISS_SM_DISABLED	2
-#define ESR_ELx_SME_ISS_ZA_DISABLED	3
-#define ESR_ELx_SME_ISS_ZT_DISABLED	4
+#define ESR_ELx_SME_ISS_SMTC_MASK		GENMASK(2, 0)
+#define ESR_ELx_SME_ISS_SMTC(esr)		((esr) & ESR_ELx_SME_ISS_SMTC_MASK)
+
+#define ESR_ELx_SME_ISS_SMTC_SME_DISABLED	0
+#define ESR_ELx_SME_ISS_SMTC_ILL		1
+#define ESR_ELx_SME_ISS_SMTC_SM_DISABLED	2
+#define ESR_ELx_SME_ISS_SMTC_ZA_DISABLED	3
+#define ESR_ELx_SME_ISS_SMTC_ZT_DISABLED	4
 
 /* ISS field definitions for MOPS exceptions */
 #define ESR_ELx_MOPS_ISS_MEM_INST	(UL(1) << 24)
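
A standalone sketch of how the new ESR_ELx_SME_ISS_SMTC() accessor is meant to be used to dispatch on the SME trap code. This is a userspace re-implementation for illustration; in the kernel, GENMASK comes from <linux/bits.h> and the dispatch lives in the SME trap handler:

#include <stdio.h>

/* Re-implemented here so the demo compiles standalone. */
#define GENMASK(h, l)   (((~0ULL) >> (63 - (h))) & (~0ULL << (l)))
#define ESR_ELx_SME_ISS_SMTC_MASK       GENMASK(2, 0)
#define ESR_ELx_SME_ISS_SMTC(esr)       ((esr) & ESR_ELx_SME_ISS_SMTC_MASK)

#define ESR_ELx_SME_ISS_SMTC_SME_DISABLED       0
#define ESR_ELx_SME_ISS_SMTC_ILL                1
#define ESR_ELx_SME_ISS_SMTC_SM_DISABLED        2
#define ESR_ELx_SME_ISS_SMTC_ZA_DISABLED        3
#define ESR_ELx_SME_ISS_SMTC_ZT_DISABLED        4

static const char *smtc_name(unsigned long long esr)
{
        switch (ESR_ELx_SME_ISS_SMTC(esr)) {
        case ESR_ELx_SME_ISS_SMTC_SME_DISABLED: return "SME disabled";
        case ESR_ELx_SME_ISS_SMTC_ILL:          return "illegal instruction";
        case ESR_ELx_SME_ISS_SMTC_SM_DISABLED:  return "streaming mode disabled";
        case ESR_ELx_SME_ISS_SMTC_ZA_DISABLED:  return "ZA disabled";
        case ESR_ELx_SME_ISS_SMTC_ZT_DISABLED:  return "ZT0 disabled";
        default:                                return "unknown";
        }
}

int main(void)
{
        printf("%s\n", smtc_name(0x3));  /* prints "ZA disabled" */
        return 0;
}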

arch/arm64/include/asm/fpsimd.h

Lines changed: 47 additions & 17 deletions
@@ -6,6 +6,7 @@
 #define __ASM_FP_H
 
 #include <asm/errno.h>
+#include <asm/percpu.h>
 #include <asm/ptrace.h>
 #include <asm/processor.h>
 #include <asm/sigcontext.h>
@@ -76,7 +77,6 @@ extern void fpsimd_load_state(struct user_fpsimd_state *state);
 extern void fpsimd_thread_switch(struct task_struct *next);
 extern void fpsimd_flush_thread(void);
 
-extern void fpsimd_signal_preserve_current_state(void);
 extern void fpsimd_preserve_current_state(void);
 extern void fpsimd_restore_current_state(void);
 extern void fpsimd_update_current_state(struct user_fpsimd_state const *state);
@@ -93,9 +93,12 @@ struct cpu_fp_state {
 	enum fp_type to_save;
 };
 
+DECLARE_PER_CPU(struct cpu_fp_state, fpsimd_last_state);
+
 extern void fpsimd_bind_state_to_cpu(struct cpu_fp_state *fp_state);
 
 extern void fpsimd_flush_task_state(struct task_struct *target);
+extern void fpsimd_save_and_flush_current_state(void);
 extern void fpsimd_save_and_flush_cpu_state(void);
 
 static inline bool thread_sm_enabled(struct thread_struct *thread)
@@ -108,6 +111,8 @@ static inline bool thread_za_enabled(struct thread_struct *thread)
 	return system_supports_sme() && (thread->svcr & SVCR_ZA_MASK);
 }
 
+extern void task_smstop_sm(struct task_struct *task);
+
 /* Maximum VL that SVE/SME VL-agnostic software can transparently support */
 #define VL_ARCH_MAX 0x100
 
@@ -195,10 +200,8 @@ struct vl_info {
 
 extern void sve_alloc(struct task_struct *task, bool flush);
 extern void fpsimd_release_task(struct task_struct *task);
-extern void fpsimd_sync_to_sve(struct task_struct *task);
-extern void fpsimd_force_sync_to_sve(struct task_struct *task);
-extern void sve_sync_to_fpsimd(struct task_struct *task);
-extern void sve_sync_from_fpsimd_zeropad(struct task_struct *task);
+extern void fpsimd_sync_from_effective_state(struct task_struct *task);
+extern void fpsimd_sync_to_effective_state_zeropad(struct task_struct *task);
 
 extern int vec_set_vector_length(struct task_struct *task, enum vec_type type,
 				 unsigned long vl, unsigned long flags);
@@ -292,14 +295,29 @@ static inline bool sve_vq_available(unsigned int vq)
 	return vq_available(ARM64_VEC_SVE, vq);
 }
 
-size_t sve_state_size(struct task_struct const *task);
+static inline size_t __sve_state_size(unsigned int sve_vl, unsigned int sme_vl)
+{
+	unsigned int vl = max(sve_vl, sme_vl);
+	return SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl));
+}
+
+/*
+ * Return how many bytes of memory are required to store the full SVE
+ * state for task, given task's currently configured vector length.
+ */
+static inline size_t sve_state_size(struct task_struct const *task)
+{
+	unsigned int sve_vl = task_get_sve_vl(task);
+	unsigned int sme_vl = task_get_sme_vl(task);
+	return __sve_state_size(sve_vl, sme_vl);
+}
 
 #else /* ! CONFIG_ARM64_SVE */
 
 static inline void sve_alloc(struct task_struct *task, bool flush) { }
 static inline void fpsimd_release_task(struct task_struct *task) { }
-static inline void sve_sync_to_fpsimd(struct task_struct *task) { }
-static inline void sve_sync_from_fpsimd_zeropad(struct task_struct *task) { }
+static inline void fpsimd_sync_from_effective_state(struct task_struct *task) { }
+static inline void fpsimd_sync_to_effective_state_zeropad(struct task_struct *task) { }
 
 static inline int sve_max_virtualisable_vl(void)
 {
@@ -333,6 +351,11 @@ static inline void vec_update_vq_map(enum vec_type t) { }
 static inline int vec_verify_vq_map(enum vec_type t) { return 0; }
 static inline void sve_setup(void) { }
 
+static inline size_t __sve_state_size(unsigned int sve_vl, unsigned int sme_vl)
+{
+	return 0;
+}
+
 static inline size_t sve_state_size(struct task_struct const *task)
 {
 	return 0;
@@ -385,22 +408,24 @@ extern int sme_set_current_vl(unsigned long arg);
 extern int sme_get_current_vl(void);
 extern void sme_suspend_exit(void);
 
+static inline size_t __sme_state_size(unsigned int sme_vl)
+{
+	size_t size = ZA_SIG_REGS_SIZE(sve_vq_from_vl(sme_vl));
+
+	if (system_supports_sme2())
+		size += ZT_SIG_REG_SIZE;
+
+	return size;
+}
+
 /*
  * Return how many bytes of memory are required to store the full SME
  * specific state for task, given task's currently configured vector
 * length.
 */
 static inline size_t sme_state_size(struct task_struct const *task)
 {
-	unsigned int vl = task_get_sme_vl(task);
-	size_t size;
-
-	size = ZA_SIG_REGS_SIZE(sve_vq_from_vl(vl));
-
-	if (system_supports_sme2())
-		size += ZT_SIG_REG_SIZE;
-
-	return size;
+	return __sme_state_size(task_get_sme_vl(task));
 }
 
 #else
@@ -421,6 +446,11 @@ static inline int sme_set_current_vl(unsigned long arg) { return -EINVAL; }
 static inline int sme_get_current_vl(void) { return -EINVAL; }
 static inline void sme_suspend_exit(void) { }
 
+static inline size_t __sme_state_size(unsigned int sme_vl)
+{
+	return 0;
+}
+
 static inline size_t sme_state_size(struct task_struct const *task)
 {
 	return 0;
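
To make the new max(sve_vl, sme_vl) sizing concrete: in streaming mode the SVE registers take on the SME vector length, so the backing buffer must cover whichever VL is larger. Architecturally each of the 32 Z registers is vq*16 bytes, each of the 16 predicates (and FFR) is vq*2 bytes. A hedged standalone calculator, ignoring signal-frame headers and any padding; the kernel's SVE_SIG_REGS_SIZE() macro is the authoritative layout:

#include <stdio.h>

static unsigned int sve_vq_from_vl(unsigned int vl)
{
        return vl / 16;                 /* quadwords (128 bits) per vector */
}

static unsigned int regs_bytes(unsigned int vq)
{
        unsigned int z   = 32 * vq * 16;  /* 32 Z regs, vq*16 bytes each */
        unsigned int p   = 16 * vq * 2;   /* 16 P regs, vq*2 bytes each */
        unsigned int ffr = vq * 2;        /* FFR is predicate-sized */

        return z + p + ffr;
}

int main(void)
{
        /* Example task: SVE VL of 16 bytes, SME VL of 64 bytes; the buffer
         * must be sized for the larger of the two. */
        unsigned int sve_vl = 16, sme_vl = 64;
        unsigned int vl = sve_vl > sme_vl ? sve_vl : sme_vl;
        unsigned int vq = sve_vq_from_vl(vl);

        printf("vq=%u -> %u bytes of SVE register state\n",
               vq, regs_bytes(vq));     /* vq=4 -> 2184 bytes */
        return 0;
}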

arch/arm64/include/asm/hugetlb.h

Lines changed: 19 additions & 10 deletions
@@ -69,29 +69,38 @@ extern void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 
 #include <asm-generic/hugetlb.h>
 
-#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
-static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
-					   unsigned long start,
-					   unsigned long end)
+static inline void __flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+					     unsigned long start,
+					     unsigned long end,
+					     unsigned long stride,
+					     bool last_level)
 {
-	unsigned long stride = huge_page_size(hstate_vma(vma));
-
 	switch (stride) {
 #ifndef __PAGETABLE_PMD_FOLDED
 	case PUD_SIZE:
-		__flush_tlb_range(vma, start, end, PUD_SIZE, false, 1);
+		__flush_tlb_range(vma, start, end, PUD_SIZE, last_level, 1);
 		break;
 #endif
 	case CONT_PMD_SIZE:
 	case PMD_SIZE:
-		__flush_tlb_range(vma, start, end, PMD_SIZE, false, 2);
+		__flush_tlb_range(vma, start, end, PMD_SIZE, last_level, 2);
 		break;
 	case CONT_PTE_SIZE:
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, 3);
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, 3);
 		break;
 	default:
-		__flush_tlb_range(vma, start, end, PAGE_SIZE, false, TLBI_TTL_UNKNOWN);
+		__flush_tlb_range(vma, start, end, PAGE_SIZE, last_level, TLBI_TTL_UNKNOWN);
 	}
 }
 
+#define __HAVE_ARCH_FLUSH_HUGETLB_TLB_RANGE
+static inline void flush_hugetlb_tlb_range(struct vm_area_struct *vma,
+					   unsigned long start,
+					   unsigned long end)
+{
+	unsigned long stride = huge_page_size(hstate_vma(vma));
+
+	__flush_hugetlb_tlb_range(vma, start, end, stride, false);
+}
+
 #endif /* __ASM_HUGETLB_H */
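
For reference, the stride-to-translation-level dispatch in __flush_hugetlb_tlb_range() above can be sketched standalone. The level hint (1 = PUD, 2 = PMD, 3 = PTE) lets the TLBI instruction carry a TTL field so hardware can limit its table walk. Sizes below assume a 4K granule; the TLBI_TTL_UNKNOWN value is a placeholder, the kernel defines its own sentinel:

#include <stdio.h>

#define PAGE_SIZE       (1UL << 12)
#define CONT_PTE_SIZE   (16 * PAGE_SIZE)  /* 16 contiguous PTEs on 4K */
#define PMD_SIZE        (1UL << 21)
#define CONT_PMD_SIZE   (16 * PMD_SIZE)
#define PUD_SIZE        (1UL << 30)
#define TLBI_TTL_UNKNOWN (-1)             /* placeholder sentinel */

/* Mirror of the switch in __flush_hugetlb_tlb_range(): map the mapping
 * stride to the page-table level the TLBI hint should name. */
static int ttl_hint(unsigned long stride)
{
        if (stride == PUD_SIZE)
                return 1;
        if (stride == PMD_SIZE || stride == CONT_PMD_SIZE)
                return 2;
        if (stride == CONT_PTE_SIZE)
                return 3;
        return TLBI_TTL_UNKNOWN;
}

int main(void)
{
        printf("2M stride -> level %d\n", ttl_hint(PMD_SIZE));  /* 2 */
        printf("1G stride -> level %d\n", ttl_hint(PUD_SIZE));  /* 1 */
        return 0;
}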
