Function report

Linux Kernel

v5.5.9


Source Code: mm/vmalloc.c    Create Date: 2022-07-28 14:59:31
Last Modify: 2020-03-12 14:18:49

Name: vb_free

Proto: static void vb_free(const void *addr, unsigned long size)

Type: void

Parameter:

Type             Name
const void *     addr
unsigned long    size
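For readability, here is a sketch of how the annotated lines below read as C. The struct vmap_block members (lock, dirty, dirty_min, dirty_max, free) and the helpers addr_to_vb_idx() and vmap_block_tree follow the v5.5 source of mm/vmalloc.c; treat this as a reading aid reconstructed from the annotations, not a verbatim copy of the file.

    /* Sketch of vb_free() as annotated below (mm/vmalloc.c, v5.5). */
    static void vb_free(const void *addr, unsigned long size)
    {
        unsigned long offset;
        unsigned long vb_idx;
        unsigned int order;
        struct vmap_block *vb;

        BUG_ON(offset_in_page(size));
        BUG_ON(size > PAGE_SIZE * VMAP_MAX_ALLOC);

        flush_cache_vunmap((unsigned long)addr, (unsigned long)addr + size);

        order = get_order(size);

        /* Page offset of the freed region inside its vmap block. */
        offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);
        offset >>= PAGE_SHIFT;

        /* RCU-protected lookup of the owning vmap_block. */
        vb_idx = addr_to_vb_idx((unsigned long)addr);
        rcu_read_lock();
        vb = radix_tree_lookup(&vmap_block_tree, vb_idx);
        rcu_read_unlock();
        BUG_ON(!vb);

        vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);

        if (debug_pagealloc_enabled_static())
            flush_tlb_kernel_range((unsigned long)addr,
                                   (unsigned long)addr + size);

        spin_lock(&vb->lock);

        /* Expand the dirty range and account the freed pages. */
        vb->dirty_min = min(vb->dirty_min, offset);
        vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));

        vb->dirty += 1UL << order;
        if (vb->dirty == VMAP_BBMAP_BITS) {
            BUG_ON(vb->free);
            spin_unlock(&vb->lock);
            free_vmap_block(vb);
        } else
            spin_unlock(&vb->lock);
    }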
1666  BUG_ON(offset_in_page(size))
1667  BUG_ON(size > PAGE_SIZE * VMAP_MAX_ALLOC) /* 256K with 4K pages */
1669  flush_cache_vunmap((unsigned long)addr, (unsigned long)addr + size)
1671  order = get_order(size), the allocation order for a block of @size bytes
1673  offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1)
1674  offset >>= PAGE_SHIFT
1676  vb_idx = addr_to_vb_idx((unsigned long)addr)
1677  rcu_read_lock() marks the beginning of an RCU read-side critical section
1678  vb = radix_tree_lookup(&vmap_block_tree, vb_idx), looked up under RCU
1679  rcu_read_unlock() marks the end of the RCU read-side critical section
1680  BUG_ON(!vb)
1682  vunmap_page_range((unsigned long)addr, (unsigned long)addr + size)
1684  If debug_pagealloc_enabled_static() Then flush_tlb_kernel_range((unsigned long)addr, (unsigned long)addr + size)
1688  spin_lock(&vb->lock)
1691  vb->dirty_min = min(vb->dirty_min, offset)
1692  vb->dirty_max = max(vb->dirty_max, offset + (1UL << order))
1694  vb->dirty += 1UL << order
1695  If vb->dirty == VMAP_BBMAP_BITS Then
1696  BUG_ON(vb->free)
1697  spin_unlock(&vb->lock)
1698  free_vmap_block(vb)
1699  Else spin_unlock(&vb->lock)
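The bookkeeping above (lines 1673-1699) is plain arithmetic on the block-relative page offset and a running dirty-page count. The following self-contained user-space sketch reproduces it with assumed constants (4K pages, a 1024-page vmap block, i.e. VMAP_BBMAP_BITS = 1024 and VMAP_BLOCK_SIZE = 4 MiB); the real values are computed at build time in mm/vmalloc.c, and order_of() is only a stand-in for get_order().

    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define PAGE_SIZE       (1UL << PAGE_SHIFT)           /* 4 KiB, assumed  */
    #define VMAP_BBMAP_BITS 1024UL                        /* pages per block */
    #define VMAP_BLOCK_SIZE (VMAP_BBMAP_BITS * PAGE_SIZE) /* 4 MiB, assumed  */

    /* Stand-in for get_order(): smallest order whose block covers size. */
    static unsigned int order_of(unsigned long size)
    {
        unsigned int order = 0;

        while ((PAGE_SIZE << order) < size)
            order++;
        return order;
    }

    int main(void)
    {
        unsigned long addr  = 0xffffc90000403000UL; /* example vmap address */
        unsigned long size  = 2 * PAGE_SIZE;        /* two-page allocation  */
        unsigned long dirty = 0;                    /* vb->dirty stand-in   */

        unsigned int  order  = order_of(size);
        unsigned long offset = (addr & (VMAP_BLOCK_SIZE - 1)) >> PAGE_SHIFT;

        dirty += 1UL << order;                      /* line 1694 above      */
        printf("order=%u offset=%lu dirty=%lu (block freed when dirty reaches %lu)\n",
               order, offset, dirty, VMAP_BBMAP_BITS);
        return 0;
    }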
Caller
Name            Describe
vm_unmap_ram    unmap linear kernel address space set up by vm_map_ram (@mem: the pointer returned by vm_map_ram, @count: the count passed to that vm_map_ram call; cannot unmap partial)
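For context on the caller, here is a hedged sketch of the vm_map_ram()/vm_unmap_ram() pairing that reaches vb_free() for small mappings (count <= VMAP_MAX_ALLOC). The four-argument vm_map_ram() with a pgprot_t is the v5.5 form of the API, and map_two_pages()/unmap_two_pages() are hypothetical helpers used only for illustration.

    #include <linux/mm.h>
    #include <linux/numa.h>
    #include <linux/vmalloc.h>

    /* Hypothetical helper: map two pages contiguously in vmalloc space. */
    static void *map_two_pages(struct page *p0, struct page *p1)
    {
        struct page *pages[2] = { p0, p1 };

        /* v5.5 signature: pages, count, preferred node, protection. */
        return vm_map_ram(pages, 2, NUMA_NO_NODE, PAGE_KERNEL);
    }

    /* Hypothetical helper: undo the mapping created above. */
    static void unmap_two_pages(void *mem)
    {
        /* count must match the vm_map_ram() call; partial unmap is not
         * allowed.  For count <= VMAP_MAX_ALLOC this path ends in vb_free(). */
        vm_unmap_ram(mem, 2);
    }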