Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/dma/swiotlb.c Create Date: 2022-07-28 10:36:14
Last Modify:2020-03-12 14:18:49 Copyright©Brick

Name:swiotlb_tbl_map_single

Proto:phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, dma_addr_t tbl_dma_addr, phys_addr_t orig_addr, size_t mapping_size, size_t alloc_size, enum dma_data_direction dir, unsigned long attrs)

Type:phys_addr_t

Parameter:

Type                       Parameter
struct device *            hwdev
dma_addr_t                 tbl_dma_addr
phys_addr_t                orig_addr
size_t                     mapping_size
size_t                     alloc_size
enum dma_data_direction    dir
unsigned long              attrs
462  If no_iotlb_memory Then panic() - display a message, perform cleanups, and halt the system; panic() never returns
465  If mem_encrypt_active() Then pr_warn_once("Memory encryption is active and system is using DMA bounce buffers\n")
468  If mapping_size > alloc_size Then
469  dev_warn_once(hwdev, "Invalid sizes (mapping: %zd bytes, alloc: %zd bytes)", mapping_size, alloc_size)
471  Return DMA_MAPPING_ERROR
474  mask = dma_get_seg_boundary(hwdev)
476  tbl_dma_addr &= mask
478  offset_slots = ALIGN(tbl_dma_addr, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT, where ALIGN() rounds up to a power-of-2 boundary and IO_TLB_SHIFT is the log2 of the IO TLB slab size (the number of slabs is command-line controllable)
483  max_slots = If mask + 1 Then ALIGN(mask + 1, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT Else 1UL << (BITS_PER_LONG - IO_TLB_SHIFT), carefully handling the integer overflow that occurs when mask == ~0UL
491  nslots = ALIGN(alloc_size, 1 << IO_TLB_SHIFT) >> IO_TLB_SHIFT
492  If alloc_size >= PAGE_SIZE Then stride = 1 << (PAGE_SHIFT - IO_TLB_SHIFT), limiting the stride (and hence the alignment) to a page size for mappings of a page or more
494  Else stride = 1
497  BUG_ON(!nslots)
503  spin_lock_irqsave(&io_tlb_lock, flags) - take the lock protecting the IO TLB data structures in the map and unmap calls
505  If unlikely(nslots > io_tlb_nslabs - io_tlb_used) Then Go to not_found; io_tlb_nslabs is the number of IO TLB blocks (in groups of 64) between io_tlb_start and io_tlb_end, command-line adjustable via setup_io_tlb_npages, and io_tlb_used counts the blocks in use
508  index = ALIGN(io_tlb_index, stride)
509  If index >= io_tlb_nslabs Then index = 0
511  wrap = index
513  Do
       While iommu_is_span_boundary(index, nslots, offset_slots, max_slots) cycle:
516      index += stride; If index >= io_tlb_nslabs Then index = 0
519      If index == wrap Then Go to not_found
       If io_tlb_list[index] >= nslots Then a run of nslots contiguous free slots starts at index:
529      count = 0; mark the nslots entries as unavailable (io_tlb_list[i] = 0), update the free-run counts of the preceding slots, set tlb_addr = io_tlb_start + (index << IO_TLB_SHIFT), and advance io_tlb_index past the allocation to avoid re-searching next round
544      Go to found
546    index += stride
547    If index >= io_tlb_nslabs Then index = 0
549  When index != wrap cycle
551  not_found :
552  tmp_io_tlb_used = io_tlb_used
554  spin_unlock_irqrestore(&io_tlb_lock, flags)
555  If Not (attrs & DMA_ATTR_NO_WARN) && printk_ratelimit() Then dev_warn(hwdev, "swiotlb buffer is full (sz: %zd bytes), total %lu (slots), used %lu (slots)\n", alloc_size, io_tlb_nslabs, tmp_io_tlb_used); DMA_ATTR_NO_WARN tells the DMA-mapping subsystem to suppress allocation-failure reports (similarly to __GFP_NOWARN)
558  Return DMA_MAPPING_ERROR
559  found :
560  io_tlb_used += nslots
561  spin_unlock_irqrestore(&io_tlb_lock, flags)
568  When i < nslots cycle io_tlb_orig_addr[index + i] = orig_addr + (i << IO_TLB_SHIFT) - save the mapping from the original address, which is needed when the memory is synced later
570  If Not (attrs & DMA_ATTR_SKIP_CPU_SYNC) && (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL) Then swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE) - copy the original buffer into the swiotlb bounce buffer; DMA_ATTR_SKIP_CPU_SYNC lets platform code skip CPU-cache synchronization, assuming the buffer has already been transferred to the 'device' domain
574  Return tlb_addr
Caller
Name           Description
swiotlb_map    Create a swiotlb mapping for the buffer at @phys, and in case of DMAing to the device, copy the data into it as well.