
Commit e3a682e

Authored by jgunthorpe, committed by joergroedel
iommu/amd: Fix corruption when mapping large pages from 0
If a page is mapped starting at 0 that is equal to or larger than can fit in the current mode (number of table levels) it results in corrupting the mapping as the following logic assumes the mode is correct for the page size being requested. There are two issues here, the check if the address fits within the table uses the start address, it should use the last address to ensure that last byte of the mapping fits within the current table mode. The second is if the mapping is exactly the size of the full page table it has to add another level to instead hold a single IOPTE for the large size. Since both corner cases require a 0 IOVA to be hit and doesn't start until a page size of 2^48 it is unlikely to ever hit in a real system. Reported-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/0-v1-27ab08d646a1+29-amd_0map_jgg@nvidia.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
Parent commit: 3f6eead

1 file changed, 8 insertions(+), 3 deletions(-)


drivers/iommu/amd/io_pgtable.c

@@ -118,6 +118,7 @@ static void free_sub_pt(u64 *root, int mode, struct list_head *freelist)
  */
 static bool increase_address_space(struct amd_io_pgtable *pgtable,
 				   unsigned long address,
+				   unsigned int page_size_level,
 				   gfp_t gfp)
 {
 	struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
@@ -133,7 +134,8 @@ static bool increase_address_space(struct amd_io_pgtable *pgtable,
 
 	spin_lock_irqsave(&domain->lock, flags);
 
-	if (address <= PM_LEVEL_SIZE(pgtable->mode))
+	if (address <= PM_LEVEL_SIZE(pgtable->mode) &&
+	    pgtable->mode - 1 >= page_size_level)
 		goto out;
 
 	ret = false;
@@ -163,18 +165,21 @@ static u64 *alloc_pte(struct amd_io_pgtable *pgtable,
 		      gfp_t gfp,
 		      bool *updated)
 {
+	unsigned long last_addr = address + (page_size - 1);
 	struct io_pgtable_cfg *cfg = &pgtable->pgtbl.cfg;
 	int level, end_lvl;
 	u64 *pte, *page;
 
 	BUG_ON(!is_power_of_2(page_size));
 
-	while (address > PM_LEVEL_SIZE(pgtable->mode)) {
+	while (last_addr > PM_LEVEL_SIZE(pgtable->mode) ||
+	       pgtable->mode - 1 < PAGE_SIZE_LEVEL(page_size)) {
 		/*
 		 * Return an error if there is no memory to update the
 		 * page-table.
 		 */
-		if (!increase_address_space(pgtable, address, gfp))
+		if (!increase_address_space(pgtable, last_addr,
+					    PAGE_SIZE_LEVEL(page_size), gfp))
 			return NULL;
 	}
