Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/dma/swiotlb.c    Create Date: 2022-07-28 10:36:00
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: swiotlb_late_init_with_default_size - Systems with larger DMA zones (those that don't support ISA) can initialize the swiotlb later using the slab allocator if needed. This should be just like above, but with some error catching.

Proto:int swiotlb_late_init_with_default_size(size_t default_size)

Type:int

Parameter:

Type      Parameter Name
size_t    default_size
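As a minimal sketch of how the parameter is used (the caller name and the 64 MB value below are illustrative, not taken from this report), a late swiotlb setup path could request a default-sized table like this:

#include <linux/swiotlb.h>

/* Hypothetical late-init hook: ask for a 64 MB bounce-buffer table.
 * If the user already set the table size on the command line,
 * io_tlb_nslabs is non-zero and default_size is ignored (line 284 below).
 */
static int example_setup_swiotlb_late(void)
{
        return swiotlb_late_init_with_default_size(64UL << 20);
}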
279  req_nslabs = io_tlb_nslabs (the number of IO TLB blocks, in groups of 64, between io_tlb_start and io_tlb_end; command line adjustable via setup_io_tlb_npages)
280  unsigned char *vstart = NULL
282  rc = 0
284  If Not io_tlb_nslabs Then
285  io_tlb_nslabs = default_size >> IO_TLB_SHIFT (the log of the size of each IO TLB slab; the number of slabs is command line controllable)
286  io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE) (the maximum allowable number of contiguous slabs to map; must be a power of 2, since the complexity of {map,unmap}_single depends linearly on this value)
292  order = get_order(io_tlb_nslabs << IO_TLB_SHIFT) (get_order determines the allocation order of a particular sized block of memory)
293  io_tlb_nslabs = SLABS_PER_PAGE << order
294  bytes = io_tlb_nslabs << IO_TLB_SHIFT
296  When SLABS_PER_PAGE << order > IO_TLB_MIN_SLABS cycle (the minimum IO TLB size worth booting with; systems with mainly 64bit capable cards will only lightly use the swiotlb, and if we can't allocate a contiguous 1MB we're probably in trouble anyway)
297  vstart = __get_free_pages(GFP_DMA | __GFP_NOWARN, order) (__GFP_NOWARN suppresses allocation failure reports)
299  If vstart Then Break
301  order--
304  If Not vstart Then
305  io_tlb_nslabs = req_nslabs
306  Return -ENOMEM
308  If order != get_order(bytes) Then
309  pr_warn("only able to allocate %ld MB\n", (PAGE_SIZE << order) >> 20)
311  io_tlb_nslabs = SLABS_PER_PAGE << order
313  rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs)
314  If rc Then free_pages((unsigned long)vstart, order)
317  Return rc
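Read back into C, the walkthrough above corresponds to roughly the following function body (a sketch reconstructed from the annotated lines; consult kernel/dma/swiotlb.c in v5.5.9 for the authoritative source):

int
swiotlb_late_init_with_default_size(size_t default_size)
{
        unsigned long bytes, req_nslabs = io_tlb_nslabs;
        unsigned char *vstart = NULL;
        unsigned int order;
        int rc = 0;

        /* Fall back to default_size only if the command line did not set a size. */
        if (!io_tlb_nslabs) {
                io_tlb_nslabs = (default_size >> IO_TLB_SHIFT);
                io_tlb_nslabs = ALIGN(io_tlb_nslabs, IO_TLB_SEGSIZE);
        }

        /* Get IO TLB memory from the low pages. */
        order = get_order(io_tlb_nslabs << IO_TLB_SHIFT);
        io_tlb_nslabs = SLABS_PER_PAGE << order;
        bytes = io_tlb_nslabs << IO_TLB_SHIFT;

        /* Back off by halving the order until the allocation succeeds
         * or the table would drop below IO_TLB_MIN_SLABS. */
        while ((SLABS_PER_PAGE << order) > IO_TLB_MIN_SLABS) {
                vstart = (void *)__get_free_pages(GFP_DMA | __GFP_NOWARN,
                                                  order);
                if (vstart)
                        break;
                order--;
        }

        if (!vstart) {
                /* Restore the original slab count before reporting failure. */
                io_tlb_nslabs = req_nslabs;
                return -ENOMEM;
        }
        if (order != get_order(bytes)) {
                /* Partial success: shrink the slab count to the pages obtained. */
                pr_warn("only able to allocate %ld MB\n",
                        (PAGE_SIZE << order) >> 20);
                io_tlb_nslabs = SLABS_PER_PAGE << order;
        }
        rc = swiotlb_late_init_with_tbl(vstart, io_tlb_nslabs);
        if (rc)
                free_pages((unsigned long)vstart, order);

        return rc;
}

The back-off loop retries __get_free_pages() with a smaller order after each failure, so the function accepts a smaller bounce-buffer table rather than failing outright; only when even IO_TLB_MIN_SLABS cannot be satisfied does it restore req_nslabs and return -ENOMEM.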