
Commit 3ee7d94

Christoph Hellwig authored and mszyprow committed
docs: core-api: document the IOVA-based API
Add an explanation of the newly added IOVA-based mapping API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
1 parent 5f3b133 commit 3ee7d94

File tree: 1 file changed, +71 -0 lines


Documentation/core-api/dma-api.rst

Lines changed: 71 additions & 0 deletions
@@ -530,6 +530,77 @@ routines, e.g.:::
        ....
        }

Part Ie - IOVA-based DMA mappings
---------------------------------

These APIs allow a very efficient mapping when using an IOMMU.  They are an
optional path that requires extra code, and are only recommended for drivers
where DMA mapping performance, or the space usage for storing the DMA
addresses, matters.  All the considerations from the previous section apply
here as well.

::

    bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
            phys_addr_t phys, size_t size);

Is used to try to allocate IOVA space for a mapping operation.  If it returns
false, this API can't be used for the given device and the normal streaming
DMA mapping API should be used instead.  The ``struct dma_iova_state`` is
allocated by the driver and must be kept around until unmap time.
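
A minimal sketch of how a driver might use this, with a fallback to the
streaming API (``my_dev``, ``mdev`` and the surrounding structure are
hypothetical, not part of this patch)::

    /* Sketch only; assumes a driver-private device structure. */
    struct my_dev_mapping {
            struct dma_iova_state state;    /* must live until unmap time */
            dma_addr_t addr;                /* used on the fallback path */
    };

    static int my_dev_map_page(struct my_dev *mdev, struct page *page,
                    size_t len)
    {
            struct my_dev_mapping *m = &mdev->mapping;

            if (dma_iova_try_alloc(mdev->dev, &m->state,
                            page_to_phys(page), len))
                    return 0;       /* IOVA path: now link and sync ranges */

            /* Fall back to the normal streaming DMA mapping API */
            m->addr = dma_map_page(mdev->dev, page, 0, len, DMA_TO_DEVICE);
            if (dma_mapping_error(mdev->dev, m->addr))
                    return -EIO;
            return 0;
    }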

::

    static inline bool dma_use_iova(struct dma_iova_state *state)

Can be used by the driver to check whether the IOVA-based API is in use after
a call to ``dma_iova_try_alloc()``.  This can be useful in the unmap path.

::

    int dma_iova_link(struct device *dev, struct dma_iova_state *state,
            phys_addr_t phys, size_t offset, size_t size,
            enum dma_data_direction dir, unsigned long attrs);

Is used to link ranges into the IOVA space previously allocated.  The start of
all but the first call to ``dma_iova_link()`` for a given state must be
aligned to the DMA merge boundary returned by ``dma_get_merge_boundary()``,
and the size of all but the last range must be aligned to the DMA merge
boundary as well.

::

    int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
            size_t offset, size_t size);

Must be called to sync the IOMMU page tables for the IOVA range mapped by one
or more calls to ``dma_iova_link()``.

For drivers that use a one-shot mapping, all ranges can be unmapped and the
IOVA freed by calling:

::

    void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
            size_t mapped_len, enum dma_data_direction dir,
            unsigned long attrs);
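
Putting the calls together, a one-shot mapping of multiple ranges might look
like the following sketch (``nr_ranges`` and the arrays are hypothetical)::

    /* Sketch of a one-shot mapping: alloc, link all ranges, sync once. */
    static int my_dev_map_ranges(struct device *dev,
                    struct dma_iova_state *state, phys_addr_t *phys,
                    size_t *len, int nr_ranges, size_t total)
    {
            size_t offset = 0;
            int i, ret;

            if (!dma_iova_try_alloc(dev, state, phys[0], total))
                    return -EOPNOTSUPP;     /* use the streaming API */

            for (i = 0; i < nr_ranges; i++) {
                    ret = dma_iova_link(dev, state, phys[i], offset,
                                    len[i], DMA_TO_DEVICE, 0);
                    if (ret)
                            goto err_destroy;
                    offset += len[i];
            }

            /* Flush the IOMMU page tables for everything linked above */
            ret = dma_iova_sync(dev, state, 0, offset);
            if (ret)
                    goto err_destroy;
            return 0;

    err_destroy:
            dma_iova_destroy(dev, state, offset, DMA_TO_DEVICE, 0);
            return ret;
    }

Once the device is done with the buffer, ``dma_iova_destroy()`` with the same
mapped length and direction unmaps everything and frees the IOVA space.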

Alternatively, drivers can dynamically manage the IOVA space by unmapping
and mapping individual regions.  In that case:

::

    void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
            size_t offset, size_t size, enum dma_data_direction dir,
            unsigned long attrs);

is used to unmap a range previously mapped, and:

::

    void dma_iova_free(struct device *dev, struct dma_iova_state *state);

is used to free the IOVA space.  All regions must have been unmapped using
``dma_iova_unlink()`` before calling ``dma_iova_free()``.
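
The unmap side of the dynamic case must unlink every region before freeing
the IOVA space; a sketch (the offset/length bookkeeping is hypothetical)::

    /* Sketch: unlink individually managed regions, then free the IOVA. */
    static void my_dev_teardown(struct device *dev,
                    struct dma_iova_state *state, size_t *offsets,
                    size_t *lens, int nr)
    {
            int i;

            if (!dma_use_iova(state))
                    return;         /* streaming API was used instead */

            for (i = 0; i < nr; i++)
                    dma_iova_unlink(dev, state, offsets[i], lens[i],
                                    DMA_FROM_DEVICE, 0);

            /* Only valid once every region has been unlinked */
            dma_iova_free(dev, state);
    }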

Part II - Non-coherent DMA allocations
--------------------------------------
