Commit a5e3b12

Petr Tesarik authored and Christoph Hellwig committed
swiotlb: do not free decrypted pages if dynamic
Fix these two error paths:

1. When set_memory_decrypted() fails, pages may be left fully or
   partially decrypted.

2. Decrypted pages may be freed if swiotlb_alloc_tlb() determines
   that the physical address is too high.

To fix the first issue, call set_memory_encrypted() on the allocated region
after a failed decryption attempt. If that also fails, leak the pages.

To fix the second issue, check that the TLB physical address is below the
requested limit before decrypting.

Let the caller differentiate between unsuitable physical address (=> retry
from a lower zone) and allocation failures (=> no point in retrying).

Cc: stable@vger.kernel.org
Fixes: 79636ca ("swiotlb: if swiotlb is full, fall back to a transient memory pool")
Signed-off-by: Petr Tesarik <petr.tesarik1@huawei-partners.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
1 parent 8f6f76a commit a5e3b12

File tree

1 file changed: kernel/dma/swiotlb.c (+16, -9 lines)

kernel/dma/swiotlb.c

Lines changed: 16 additions & 9 deletions
@@ -558,29 +558,40 @@ void __init swiotlb_exit(void)
  * alloc_dma_pages() - allocate pages to be used for DMA
  * @gfp: GFP flags for the allocation.
  * @bytes: Size of the buffer.
+ * @phys_limit: Maximum allowed physical address of the buffer.
  *
  * Allocate pages from the buddy allocator. If successful, make the allocated
  * pages decrypted that they can be used for DMA.
  *
- * Return: Decrypted pages, or %NULL on failure.
+ * Return: Decrypted pages, %NULL on allocation failure, or ERR_PTR(-EAGAIN)
+ * if the allocated physical address was above @phys_limit.
  */
-static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes)
+static struct page *alloc_dma_pages(gfp_t gfp, size_t bytes, u64 phys_limit)
 {
         unsigned int order = get_order(bytes);
         struct page *page;
+        phys_addr_t paddr;
         void *vaddr;
 
         page = alloc_pages(gfp, order);
         if (!page)
                 return NULL;
 
-        vaddr = page_address(page);
+        paddr = page_to_phys(page);
+        if (paddr + bytes - 1 > phys_limit) {
+                __free_pages(page, order);
+                return ERR_PTR(-EAGAIN);
+        }
+
+        vaddr = phys_to_virt(paddr);
         if (set_memory_decrypted((unsigned long)vaddr, PFN_UP(bytes)))
                 goto error;
         return page;
 
 error:
-        __free_pages(page, order);
+        /* Intentional leak if pages cannot be encrypted again. */
+        if (!set_memory_encrypted((unsigned long)vaddr, PFN_UP(bytes)))
+                __free_pages(page, order);
         return NULL;
 }
 
@@ -618,11 +629,7 @@ static struct page *swiotlb_alloc_tlb(struct device *dev, size_t bytes,
         else if (phys_limit <= DMA_BIT_MASK(32))
                 gfp |= __GFP_DMA32;
 
-        while ((page = alloc_dma_pages(gfp, bytes)) &&
-               page_to_phys(page) + bytes - 1 > phys_limit) {
-                /* allocated, but too high */
-                __free_pages(page, get_order(bytes));
-
+        while (IS_ERR(page = alloc_dma_pages(gfp, bytes, phys_limit))) {
                 if (IS_ENABLED(CONFIG_ZONE_DMA32) &&
                     phys_limit < DMA_BIT_MASK(64) &&
                     !(gfp & (__GFP_DMA32 | __GFP_DMA)))
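The commit message asks the caller to tell an unsuitable physical address (retry from a lower zone) apart from a plain allocation failure (give up), and the diff encodes that as NULL versus ERR_PTR(-EAGAIN). The following is a minimal, self-contained sketch of that convention in kernel-style C; my_alloc_bounded() and my_alloc_with_fallback() are hypothetical stand-ins, not functions from swiotlb.c, and the zone fallback is simplified to a single __GFP_DMA32 step.

/* Hypothetical sketch, not the swiotlb code: shows the NULL vs ERR_PTR split. */
#include <linux/err.h>
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/types.h>

/* Allocate pages below phys_limit; NULL = hard failure, -EAGAIN = too high. */
static struct page *my_alloc_bounded(gfp_t gfp, size_t bytes, u64 phys_limit)
{
        unsigned int order = get_order(bytes);
        struct page *page = alloc_pages(gfp, order);

        if (!page)
                return NULL;                    /* allocation failed: no point retrying */

        if (page_to_phys(page) + bytes - 1 > phys_limit) {
                __free_pages(page, order);
                return ERR_PTR(-EAGAIN);        /* too high: caller may retry lower */
        }
        return page;
}

/* Caller: retry from a lower zone only on -EAGAIN, give up on NULL. */
static struct page *my_alloc_with_fallback(size_t bytes, u64 phys_limit)
{
        gfp_t gfp = GFP_KERNEL;
        struct page *page;

        while (IS_ERR(page = my_alloc_bounded(gfp, bytes, phys_limit))) {
                if (gfp & __GFP_DMA32)
                        return NULL;            /* already at the lowest zone we try */
                gfp |= __GFP_DMA32;             /* simplified single-step fallback */
        }
        return page;                            /* valid pages, or NULL on hard failure */
}

The design point mirrors the patch: only the "address too high" case is worth retrying, so it is the only case reported as an ERR_PTR. Since IS_ERR(NULL) is false, a hard allocation failure falls straight through the loop and the caller stops immediately.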

0 commit comments
