Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/dma/swiotlb.c    Create Date: 2022-07-28 10:36:04
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: swiotlb_late_init_with_tbl

Proto: int swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)

Type: int

Parameter:

Type           Name
char *         tlb
unsigned long  nslabs
333  bytes = nslabs << IO_TLB_SHIFT. IO_TLB_SHIFT is the log2 of the size of each IO TLB slab; the number of slabs is command-line controllable.
335  io_tlb_nslabs = nslabs. io_tlb_nslabs is the number of IO TLB blocks (in groups of 64) between io_tlb_start and io_tlb_end, adjustable on the command line via setup_io_tlb_npages.
336  io_tlb_start = virt_to_phys(tlb). io_tlb_start and io_tlb_end are used for a quick range check in swiotlb_tbl_unmap_single and swiotlb_tbl_sync_single_* to see whether memory was in fact allocated by this API; virt_to_phys() returns the physical (CPU) address for the given kernel virtual address.
337  io_tlb_end = io_tlb_start + bytes.
339  set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT). The buffer is marked decrypted page by page; bytes >> PAGE_SHIFT is the number of pages.
340  memset(tlb, 0, bytes)
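
Taken together, lines 333-340 record the bounce buffer geometry and prepare the memory. A sketch of the corresponding C, reconstructed from the annotated statements above (casts and line breaks may differ from the source):

    bytes = nslabs << IO_TLB_SHIFT;       /* total bounce buffer size in bytes */
    io_tlb_nslabs = nslabs;               /* number of IO TLB slabs */
    io_tlb_start = virt_to_phys(tlb);     /* physical bounds, used for the quick */
    io_tlb_end = io_tlb_start + bytes;    /* range check in unmap/sync helpers   */

    /* Under memory encryption (SME/SEV) the bounce buffer must be shared with
     * devices, so its pages are switched to the decrypted mapping. */
    set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
    memset(tlb, 0, bytes);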
347  io_tlb_list = __get_free_pages(GFP_KERNEL, get_order(io_tlb_nslabs * sizeof(int))). io_tlb_list is the free list describing the number of free entries available from each index; get_order() determines the allocation order for the requested size.
349  If io_tlb_list is NULL, go to cleanup3.
352  io_tlb_orig_addr = __get_free_pages(GFP_KERNEL, get_order(io_tlb_nslabs * sizeof(phys_addr_t))).
356  If io_tlb_orig_addr is NULL, go to cleanup4.
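
Lines 347-356 allocate the two bookkeeping arrays from the page allocator, one unsigned int and one phys_addr_t per slab; roughly:

    /* Free list: number of contiguous free slabs available from each index. */
    io_tlb_list = (unsigned int *)__get_free_pages(GFP_KERNEL,
                        get_order(io_tlb_nslabs * sizeof(int)));
    if (!io_tlb_list)
            goto cleanup3;

    /* Original (CPU) physical address of each mapped slab, kept for the
     * swiotlb_tbl_sync_single_* operations. */
    io_tlb_orig_addr = (phys_addr_t *)__get_free_pages(GFP_KERNEL,
                        get_order(io_tlb_nslabs * sizeof(phys_addr_t)));
    if (!io_tlb_orig_addr)
            goto cleanup4;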
359  For each slab index i from 0 to io_tlb_nslabs - 1:
360  io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE). IO_TLB_SEGSIZE is the maximum number of contiguous slabs that can be mapped; it must be a power of 2, and the complexity of {map,unmap}_single is linearly dependent on it.
361  io_tlb_orig_addr[i] = INVALID_PHYS_ADDR. io_tlb_orig_addr saves the original address corresponding to each mapped entry, for use by the sync operations.
363  io_tlb_index = 0
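
Lines 359-363 seed the free list so that each entry holds the number of free slabs remaining up to the next IO_TLB_SEGSIZE boundary, and mark every slot as unused; roughly:

    for (i = 0; i < io_tlb_nslabs; i++) {
            /* Free slabs left before the next IO_TLB_SEGSIZE-aligned boundary;
             * OFFSET(i, IO_TLB_SEGSIZE) is i modulo the segment size. */
            io_tlb_list[i] = IO_TLB_SEGSIZE - OFFSET(i, IO_TLB_SEGSIZE);
            io_tlb_orig_addr[i] = INVALID_PHYS_ADDR;    /* slot not in use */
    }
    io_tlb_index = 0;    /* next allocation starts its search at slab 0 */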
365  swiotlb_print_info()
367  late_alloc = 1
369  swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT)
371  Return 0
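
The success path (lines 365-371) then publishes the new pool; as a sketch:

    swiotlb_print_info();    /* log placement and size of the bounce buffer */
    late_alloc = 1;          /* the pool came from the page allocator, so
                              * swiotlb_exit() frees it with free_pages() */
    swiotlb_set_max_segment(io_tlb_nslabs << IO_TLB_SHIFT);
    return 0;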
373  cleanup4:
374  free_pages((unsigned long)io_tlb_list, get_order(io_tlb_nslabs * sizeof(int)))
376  io_tlb_list = NULL
377  cleanup3:
378  swiotlb_cleanup()
379  Return -ENOMEM
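
The error labels unwind in reverse order of allocation. Written out with the gotos that the flattened report hides:

    cleanup4:
            free_pages((unsigned long)io_tlb_list,
                       get_order(io_tlb_nslabs * sizeof(int)));
            io_tlb_list = NULL;
    cleanup3:
            swiotlb_cleanup();    /* reset io_tlb_start/end/nslabs and max_segment */
            return -ENOMEM;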
Caller
Name                                 Description
swiotlb_late_init_with_default_size  Systems with larger DMA zones (those that don't support ISA) can initialize the swiotlb later using the slab allocator if needed. This should be just like above, but with some error catching.
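
A minimal sketch of the calling pattern, using a hypothetical helper name (swiotlb_try_late_init and its size parameter are illustrative only; the real swiotlb_late_init_with_default_size additionally retries with smaller orders and re-rounds io_tlb_nslabs to the size actually obtained):

    /* Hypothetical: allocate low pages and register them as the late SWIOTLB. */
    static int swiotlb_try_late_init(size_t size)
    {
            unsigned long nslabs = ALIGN(size >> IO_TLB_SHIFT, IO_TLB_SEGSIZE);
            unsigned int order = get_order(nslabs << IO_TLB_SHIFT);
            char *vstart;

            /* The bounce buffer must live in memory every device can address. */
            vstart = (char *)__get_free_pages(GFP_DMA | __GFP_NOWARN, order);
            if (!vstart)
                    return -ENOMEM;

            if (swiotlb_late_init_with_tbl(vstart, nslabs)) {
                    free_pages((unsigned long)vstart, order);
                    return -ENOMEM;
            }
            return 0;
    }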