| Caller Name | Description |
| --- | --- |
| __xa_erase | __xa_erase() - Erase the entry at @index from the XArray while the xa_lock is held. After this function returns, loading from @index will return %NULL. If the index is part of a multi-index entry, all indices will be erased. |
| __xa_store | __xa_store() - Store this entry in the XArray (the variant called with the xa_lock already held); see the XArray sketch after this table. |
| __xa_cmpxchg | __xa_cmpxchg() - Store this entry in the XArray if the entry at @index still matches @old. |
| __xa_insert | __xa_insert() - Store this entry in the XArray only if no entry is already present at @index. Inserting a NULL entry will store a reserved entry (like xa_reserve()) if no entry is present. |
| xa_store_range | xa_store_range() - Store this entry at a range of indices in the XArray. |
| ida_alloc_range | ida_alloc_range() - Allocate an unused ID between @min and @max, inclusive, using allocation flags @gfp. The allocated ID will not exceed %INT_MAX, even if @max is larger. See the IDA sketch after this table. |
| ida_free | ida_free() - Release a previously allocated @id back to the IDA. Context: Any context. |
| ida_destroy | ida_destroy() - Free the whole IDA tree, including all cached layers. |
| xa_store_order | If anyone needs this, please move it to xarray.c. We have no current users outside the test suite, because all current multislot users want to use the advanced API. |
| check_xas_retry | |
| check_xa_shrink | |
| check_xas_erase | |
| check_multi_store_1 | |
| check_multi_store_2 | |
| __check_store_iter | |
| xa_store_many_order | |
| check_create_range_4 | |
| shadow_remove | |
| check_workingset | |
| page_cache_delete_batch | page_cache_delete_batch() - Delete several pages from the page cache. Walks over @mapping->i_pages and removes the pages passed in @pvec from the mapping. |
| replace_page_cache_page | replace_page_cache_page() - Replace a pagecache page (@old) with a new one (@new), using allocation mode @gfp_mask. |
| __add_to_page_cache_locked | |
| __clear_shadow_entry | Regular page slots are stabilized by the page lock even without the tree itself locked. These unlocked entries need verification under the tree lock. |
| shadow_lru_isolate | |
| add_to_swap_cache | add_to_swap_cache() resembles add_to_page_cache_locked() on swapper_space, but sets the SwapCache flag and private data instead of mapping and index. |
| __delete_from_swap_cache | Must be called only on pages that have been verified to be in the swap cache. |
| migrate_page_move_mapping | Replace the page in the mapping. The number of remaining references must be: 1 for anonymous pages without a mapping; 2 for pages with a mapping; 3 for pages with a mapping and PagePrivate/PagePrivate2 set. |
| migrate_huge_page_move_mapping | The expected number of remaining references is the same as that of migrate_page_move_mapping(). |
| dax_lock_entry | Return: the entry stored at this location before it was locked. |
| grab_mapping_entry | Find the page cache entry at the given index. If it is a DAX entry, return it with the entry locked. If the page cache doesn't contain an entry at that index, add a locked empty entry. When requesting an entry with size DAX_PMD, grab_mapping_entry() will either return that locked entry or fall back (VM_FAULT_FALLBACK) if PTE entries exist within the requested PMD range. |
| __dax_invalidate_entry | |
| dax_writeback_one | |
| page_cache_delete | Lock ordering (outer to inner): ->i_mmap_rwsem (truncate_pagecache) → ->private_lock (__free_pte->__set_page_dirty_buffers) → ->swap_lock (exclusive_swap_page, others) → ->i_pages lock; ->i_mutex → ->i_mmap_rwsem (truncate->unmap_mapping_range); ->mmap_sem → ->i_mmap_rwsem. |
| __xa_alloc | __xa_alloc() - Find somewhere to store this entry in the XArray. |
| dax_unlock_entry | We used the xa_state to get the entry, but then we locked the entry and dropped the xa_lock, so we know the xa_state is stale and must be reset before use. |
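
The XArray rows above all funnel into the same store/erase primitives. Below is a minimal sketch of how the unlocked counterparts of those calls fit together in a kernel context; the array name `demo_xa`, the index, and the stored values are invented for illustration and do not come from any caller in the table.

```c
#include <linux/xarray.h>

/* Illustrative only: demo_xa and the values stored are invented for
 * this sketch; the callers in the table operate on their own arrays. */
static DEFINE_XARRAY(demo_xa);

static int xarray_demo(void)
{
	void *old;

	/* xa_store() takes the xa_lock itself; __xa_store() in the table
	 * is the variant called with the lock already held. */
	old = xa_store(&demo_xa, 5, xa_mk_value(42), GFP_KERNEL);
	if (xa_is_err(old))
		return xa_err(old);

	/* Conditional replace: only succeeds while the entry still
	 * matches the expected old value (cf. __xa_cmpxchg). */
	xa_cmpxchg(&demo_xa, 5, xa_mk_value(42), xa_mk_value(43), GFP_KERNEL);

	/* After erase, loading from the index returns NULL
	 * (cf. the __xa_erase description above). */
	xa_erase(&demo_xa, 5);
	WARN_ON(xa_load(&demo_xa, 5) != NULL);
	return 0;
}
```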
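Similarly, the ida_alloc_range()/ida_free()/ida_destroy() rows map onto a simple allocate/release lifecycle. A minimal sketch, assuming a kernel context; the IDA name `demo_ida` and the range bounds are invented for illustration:

```c
#include <linux/idr.h>

/* Illustrative only: demo_ida and the [0, 1023] range are invented. */
static DEFINE_IDA(demo_ida);

static int ida_demo(void)
{
	/* Returns an unused ID in [min, max], or a negative errno. */
	int id = ida_alloc_range(&demo_ida, 0, 1023, GFP_KERNEL);

	if (id < 0)
		return id;

	/* Release this one ID back to the IDA... */
	ida_free(&demo_ida, id);

	/* ...or tear down the whole tree, including cached layers. */
	ida_destroy(&demo_ida);
	return 0;
}
```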