
Commit 895c074

niklas88 authored and awilliam committed
vfio/type1: Respect IOMMU reserved regions in vfio_test_domain_fgsp()
Since commit cbf7827 ("iommu/s390: Fix potential s390_domain aperture shrinking") the s390 IOMMU driver uses reserved regions for the system-provided DMA ranges of PCI devices. Previously it reduced the size of the IOMMU aperture and checked it on each mapping operation. On current machines the system denies use of DMA addresses below 2^32 for all PCI devices.

Usually mapping an IOVA in a reserved region is harmless until a DMA actually tries to utilize the mapping. However, on s390 there is a virtual PCI device called ISM which is implemented in firmware and used for cross-LPAR communication. Unlike real PCI devices this device does not use the hardware IOMMU but inspects IOMMU translation tables directly on IOTLB flush (s390 RPCIT instruction). If it detects IOVA mappings outside the allowed ranges it goes into an error state. This error state then causes the device to be unavailable to the KVM guest.

Analysing this we found that vfio_test_domain_fgsp() maps 2 pages at DMA address 0 irrespective of the IOMMU's reserved regions. Even if usually harmless this seems wrong in the general case, so instead go through the freshly updated IOVA list and try to find a range that isn't reserved, fits 2 pages, and is PAGE_SIZE * 2 aligned. If found, use that to test for fine-grained super pages.

Fixes: af02916 ("vfio/type1: Check reserved region conflict and update iova list")
Signed-off-by: Niklas Schnelle <schnelle@linux.ibm.com>
Reviewed-by: Matthew Rosato <mjrosato@linux.ibm.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/20230110164427.4051938-2-schnelle@linux.ibm.com
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
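For illustration only, here is a minimal user-space sketch of the range selection the patch adds: align each candidate range's start up to PAGE_SIZE * 2 and take the first range in which two pages still fit. The struct, the helper pick_test_iova(), and the address layout below are invented for this example and are not part of the kernel change.

/*
 * Standalone sketch (not kernel code) of picking a legal IOVA for the
 * two-page test mapping. The array stands in for the vfio IOVA list and
 * the values are made up, e.g. a platform that reserves everything
 * below 4 GiB, as current s390 machines do.
 */
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL
/* Round x up to the next multiple of the power-of-two value a. */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

struct iova_range {
	uint64_t start;
	uint64_t end;
};

/*
 * Return the first PAGE_SIZE * 2 aligned start that leaves room for two
 * pages, or UINT64_MAX if no range qualifies.
 */
static uint64_t pick_test_iova(const struct iova_range *ranges, int n)
{
	for (int i = 0; i < n; i++) {
		uint64_t start = ALIGN(ranges[i].start, PAGE_SIZE * 2);

		if (start >= ranges[i].end || ranges[i].end - start < PAGE_SIZE * 2)
			continue;
		return start;
	}
	return UINT64_MAX;
}

int main(void)
{
	/* Hypothetical IOVA list: the only usable range starts at 4 GiB. */
	struct iova_range ranges[] = {
		{ 0x100000000ULL, 0x3ffffffffffffULL },
	};
	uint64_t iova = pick_test_iova(ranges, 1);

	if (iova != UINT64_MAX)
		printf("would test fgsp at IOVA 0x%llx\n", (unsigned long long)iova);
	else
		printf("no suitable IOVA range\n");
	return 0;
}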
1 parent b7bfaa7 commit 895c074


drivers/vfio/vfio_iommu_type1.c

Lines changed: 20 additions & 11 deletions
@@ -1856,24 +1856,33 @@ static int vfio_iommu_replay(struct vfio_iommu *iommu,
  * significantly boosts non-hugetlbfs mappings and doesn't seem to hurt when
  * hugetlbfs is in use.
  */
-static void vfio_test_domain_fgsp(struct vfio_domain *domain)
+static void vfio_test_domain_fgsp(struct vfio_domain *domain, struct list_head *regions)
 {
-	struct page *pages;
 	int ret, order = get_order(PAGE_SIZE * 2);
+	struct vfio_iova *region;
+	struct page *pages;
+	dma_addr_t start;
 
 	pages = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
 	if (!pages)
 		return;
 
-	ret = iommu_map(domain->domain, 0, page_to_phys(pages), PAGE_SIZE * 2,
-			IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
-	if (!ret) {
-		size_t unmapped = iommu_unmap(domain->domain, 0, PAGE_SIZE);
+	list_for_each_entry(region, regions, list) {
+		start = ALIGN(region->start, PAGE_SIZE * 2);
+		if (start >= region->end || (region->end - start < PAGE_SIZE * 2))
+			continue;
 
-		if (unmapped == PAGE_SIZE)
-			iommu_unmap(domain->domain, PAGE_SIZE, PAGE_SIZE);
-		else
-			domain->fgsp = true;
+		ret = iommu_map(domain->domain, start, page_to_phys(pages), PAGE_SIZE * 2,
+				IOMMU_READ | IOMMU_WRITE | IOMMU_CACHE);
+		if (!ret) {
+			size_t unmapped = iommu_unmap(domain->domain, start, PAGE_SIZE);
+
+			if (unmapped == PAGE_SIZE)
+				iommu_unmap(domain->domain, start + PAGE_SIZE, PAGE_SIZE);
+			else
+				domain->fgsp = true;
+		}
+		break;
 	}
 
 	__free_pages(pages, order);
@@ -2326,7 +2335,7 @@ static int vfio_iommu_type1_attach_group(void *iommu_data,
 		}
 	}
 
-	vfio_test_domain_fgsp(domain);
+	vfio_test_domain_fgsp(domain, &iova_copy);
 
 	/* replay mappings on new domains */
 	ret = vfio_iommu_replay(iommu, domain);
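Two details of the new code are worth noting. The search stops at the first range that can hold the aligned two-page test mapping (the unconditional break), which appears to reflect that fine-grained super page support is a property of the IOMMU domain rather than of any particular IOVA; the loop only exists to pick an address that is legal to map. And as the second hunk shows, the caller now passes &iova_copy, the IOVA list that has just been updated for the group's reserved regions, so the test mapping can no longer land in a range the platform has declared off limits.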
