Commit 4dbd6a3

kelleymh authored and suryasaimadhu committed
x86/ioremap: Fix page aligned size calculation in __ioremap_caller()
Current code re-calculates the size after aligning the starting and
ending physical addresses on a page boundary. But the re-calculation
also embeds the masking of high order bits that exceed the size of the
physical address space (via PHYSICAL_PAGE_MASK). If the masking removes
any high order bits, the size calculation results in a huge value that
is likely to immediately fail.

Fix this by re-calculating the page-aligned size first. Then mask any
high order bits using PHYSICAL_PAGE_MASK.

Fixes: ffa71f3 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
Signed-off-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: <stable@kernel.org>
Link: https://lore.kernel.org/r/1668624097-14884-2-git-send-email-mikelley@microsoft.com
1 parent 50bcceb commit 4dbd6a3

File tree

1 file changed

+7
-1
lines changed


arch/x86/mm/ioremap.c

Lines changed: 7 additions & 1 deletion
@@ -217,9 +217,15 @@ __ioremap_caller(resource_size_t phys_addr, unsigned long size,
 	 * Mappings have to be page-aligned
 	 */
 	offset = phys_addr & ~PAGE_MASK;
-	phys_addr &= PHYSICAL_PAGE_MASK;
+	phys_addr &= PAGE_MASK;
 	size = PAGE_ALIGN(last_addr+1) - phys_addr;
 
+	/*
+	 * Mask out any bits not part of the actual physical
+	 * address, like memory encryption bits.
+	 */
+	phys_addr &= PHYSICAL_PAGE_MASK;
+
 	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
 				 pcm, &new_pcm);
 	if (retval) {
