
Commit 2042c35

Balbir Singh authored and mszyprow committed
dma/mapping.c: dev_dbg support for dma_addressing_limited
During the debugging and resolution of an issue involving forced use of bounce buffers (commit 7170130, "x86/mm/init: Handle the special case of device private pages in add_pages(), to not increase max_pfn and trigger dma_addressing_limited() bounce buffers" [1]), it would have been easier to diagnose the problem if dma_addressing_limited() had emitted debug information about the device being unable to address all of memory, which forces every access through a bounce buffer. Please see [2].

Implement a dev_dbg() print to flag the potential use of bounce buffers when we hit the condition. When swiotlb is used, dma_addressing_limited() is consulted by dma_direct_max_mapping_size() to determine the maximum DMA buffer size, so the debug print can be triggered from that check as well (when enabled).

Link: https://lore.kernel.org/lkml/20250401000752.249348-1-balbirs@nvidia.com/ [1]
Link: https://lore.kernel.org/lkml/20250310112206.4168-1-spasswolf@web.de/ [2]
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Kees Cook <kees@kernel.org>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Bert Karwatzki <spasswolf@web.de>
Cc: Christoph Hellwig <hch@infradead.org>
Signed-off-by: Balbir Singh <balbirs@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Link: https://lore.kernel.org/r/20250414113752.3298276-1-balbirs@nvidia.com
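For context, here is a minimal sketch of a driver-side probe path that runs into the condition the new dev_dbg() reports. The helper name and the 48-bit mask are hypothetical; only dma_set_mask_and_coherent() and dma_addressing_limited() are real kernel APIs, and dma_addressing_limited() is the function this commit instruments:

#include <linux/device.h>
#include <linux/dma-mapping.h>

/* Hypothetical probe helper; not part of this commit. */
static int example_setup_dma(struct device *dev)
{
	int ret;

	/* Assume, for this sketch, hardware that drives 48 address bits. */
	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(48));
	if (ret)
		return ret;

	/*
	 * If system memory extends beyond the mask, accesses may be
	 * bounced. With this patch applied, the check below also logs
	 * "device is DMA addressing limited" via dev_dbg().
	 */
	if (dma_addressing_limited(dev))
		dev_warn(dev, "DMA addressing limited, expect swiotlb bouncing\n");

	return 0;
}

Note that dev_dbg() output is compiled out unless DEBUG or CONFIG_DYNAMIC_DEBUG is enabled, and with dynamic debug the callsite must also be switched on for the message to appear.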
1 parent d7b98ae commit 2042c35

1 file changed, 10 insertions(+), 1 deletion(-)


kernel/dma/mapping.c

@@ -918,7 +918,7 @@ EXPORT_SYMBOL(dma_set_coherent_mask);
  * the system, else %false. Lack of addressing bits is the prime reason for
  * bounce buffering, but might not be the only one.
  */
-bool dma_addressing_limited(struct device *dev)
+static bool __dma_addressing_limited(struct device *dev)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
@@ -930,6 +930,15 @@ bool dma_addressing_limited(struct device *dev)
 		return false;
 	return !dma_direct_all_ram_mapped(dev);
 }
+
+bool dma_addressing_limited(struct device *dev)
+{
+	if (!__dma_addressing_limited(dev))
+		return false;
+
+	dev_dbg(dev, "device is DMA addressing limited\n");
+	return true;
+}
 EXPORT_SYMBOL_GPL(dma_addressing_limited);
 
 size_t dma_max_mapping_size(struct device *dev)
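As the commit message notes, dma_direct_max_mapping_size() consults dma_addressing_limited() when sizing mappings under swiotlb, so the new debug print can also fire from that path. A paraphrased sketch of that caller (from kernel/dma/direct.c; treat the exact body as an approximation, not a quote from this commit):

size_t dma_direct_max_mapping_size(struct device *dev)
{
	/*
	 * When swiotlb is active and the device either cannot address
	 * all of memory or is forced to bounce, the largest safe
	 * mapping is capped by the swiotlb mapping size. Evaluating
	 * dma_addressing_limited() here can now emit the dev_dbg().
	 */
	if (is_swiotlb_active(dev) &&
	    (dma_addressing_limited(dev) || is_swiotlb_force_bounce(dev)))
		return swiotlb_max_mapping_size(dev);
	return SIZE_MAX;
}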
