Function report
Source Code: include/linux/xarray.h
Create Date: 2022-07-28 05:42:09
Last Modified: 2020-03-12 14:18:49 | Copyright © Brick
Name: xa_is_value() - Determine if an entry is a value.
@entry: XArray entry.
Context: Any context.
Return: True if the entry is a value, false if it is a pointer.
Proto: static inline bool xa_is_value(const void *entry)
Type: bool
Parameter:
Type | Name |
---|---|
const void * | entry |
79 | Return entry & 1: the entry is a value if its bottom (tag) bit is set |
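The test works because the XArray encodes small integers as tagged pointers: a value v is stored in an entry with its bottom bit set, while real kernel pointers are at least word-aligned and so always have that bit clear. The sketch below is a stand-alone, user-space mock-up of that encoding; the xa_mk_value()/xa_to_value() bodies shown here are an assumption consistent with the "entry & 1" check above, not taken from this report.

```c
#include <stdbool.h>
#include <stdio.h>

/* Mock of the value-entry encoding: v is stored as (v << 1) | 1. */
static inline void *xa_mk_value(unsigned long v)
{
	return (void *)((v << 1) | 1);		/* tag the bottom bit */
}

static inline unsigned long xa_to_value(const void *entry)
{
	return (unsigned long)entry >> 1;	/* strip the tag */
}

static inline bool xa_is_value(const void *entry)
{
	return (unsigned long)entry & 1;	/* the "entry & 1" from line 79 */
}

int main(void)
{
	int page;				/* stands in for a struct page */
	void *value = xa_mk_value(42);
	void *pointer = &page;

	printf("value entry:   %d\n", xa_is_value(value));	/* prints 1 */
	printf("pointer entry: %d\n", xa_is_value(pointer));	/* prints 0 */
	printf("decoded:       %lu\n", xa_to_value(value));	/* prints 42 */
	return 0;
}
```

Because an aligned pointer can never have its bottom bit set, this single-bit check is enough to distinguish the two entry kinds without any extra metadata.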
Caller:
Name | Description |
---|---|
radix_tree_extend | Extend a radix tree so it can store key @index. |
insert_entries | |
__radix_tree_replace | |
__radix_tree_delete | |
xas_expand | xas_expand adds nodes to the head of the tree until it has reached sufficient height to be able to contain @xas->xa_index |
xas_store | xas_store() - Store this entry in the XArray |
ida_alloc_range | ida_alloc_range() - Allocate an unused ID. @ida: IDA handle. @min: Lowest ID to allocate. @max: Highest ID to allocate. @gfp: Memory allocation flags. Allocate an ID between @min and @max, inclusive. The allocated ID will |
ida_free | ida_free() - Release an allocated ID. @ida: IDA handle. @id: Previously allocated ID. Context: Any context. |
ida_destroy | ida_destroy() - Free all IDs. @ida: IDA handle. Calling this function frees all IDs and releases all resources used by an IDA. When this call returns, the IDA is empty and can be reused or freed. If the IDA is already empty, there is no need to call this |
__check_store_iter | |
page_cache_delete_batch | page_cache_delete_batch - delete several pages from page cache. @mapping: the mapping to which pages belong. @pvec: pagevec with pages to delete. The function walks over mapping->i_pages and removes pages passed in @pvec from the mapping |
filemap_range_has_page | filemap_range_has_page - check if a page exists in range |
__add_to_page_cache_locked | |
page_cache_next_miss | page_cache_next_miss() - Find the next gap in the page cache. @mapping: Mapping. @index: Index. @max_scan: Maximum range to search. Search the range [index, min(index + max_scan - 1, ULONG_MAX)] for the gap with the lowest index. |
page_cache_prev_miss | page_cache_prev_miss() - Find the previous gap in the page cache. @mapping: Mapping. @index: Index. @max_scan: Maximum range to search. Search the range [max(index - max_scan + 1, 0), index] for the gap with the highest index. |
find_get_entry | find_get_entry - find and get a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset |
find_lock_entry | find_lock_entry - locate, pin and lock a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset. If there is a page cache page, it is returned locked and with an increased |
pagecache_get_page | pagecache_get_page - find and get a page reference. @mapping: the address_space to search. @offset: the page index. @fgp_flags: PCG flags. @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset. |
find_get_entries | find_get_entries - gang pagecache lookup. @mapping: The address_space to search. @start: The starting page cache index. @nr_entries: The maximum number of entries. @entries: Where the resulting entries are placed. @indices: The cache indices corresponding to the |
find_get_pages_range | find_get_pages_range - gang pagecache lookup. @mapping: The address_space to search. @start: The starting page index. @end: The final page index (inclusive). @nr_pages: The maximum number of pages. @pages: Where the resulting pages are placed |
find_get_pages_contig | find_get_pages_contig - gang contiguous pagecache lookup. @mapping: The address_space to search. @index: The starting page index. @nr_pages: The maximum number of pages. @pages: Where the resulting pages are placed. find_get_pages_contig() works exactly like |
find_get_pages_range_tag | find_get_pages_range_tag - find and return pages in given range matching @tag. @mapping: the address_space to search. @index: the starting page index. @end: The final page index (inclusive). @tag: the tag index. @nr_pages: the maximum number of pages. @pages: |
filemap_map_pages | |
__do_page_cache_readahead | __do_page_cache_readahead() actually reads a chunk of disk. It allocates the pages first, then submits them for I/O. This avoids the very bad behaviour which would occur if page allocations are causing VM writeback. |
pagevec_remove_exceptionals | pagevec_remove_exceptionals - pagevec exceptionals pruning. @pvec: The pagevec to prune. pagevec_lookup_entries() fills both pages and exceptional radix tree entries into the pagevec |
truncate_exceptional_pvec_entries | Unconditionally remove exceptional entries. Usually called from truncate path. Note that the pagevec may be altered by this function by removing exceptional entries similar to what pagevec_remove_exceptionals does. |
truncate_inode_pages_range | truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets. @mapping: mapping to truncate. @lstart: offset from which to truncate. @lend: offset to which to truncate (inclusive). Truncate the page cache, removing the pages that |
invalidate_mapping_pages | invalidate_mapping_pages - Invalidate all the unlocked pages of one inode. @mapping: the address_space which holds the pages to invalidate. @start: the offset 'from' which to invalidate. @end: the offset 'to' which to invalidate (inclusive). This function only |
invalidate_inode_pages2_range | invalidate_inode_pages2_range - remove range of pages from an address_space. @mapping: the address_space. @start: the page offset 'from' which to invalidate. @end: the page offset 'to' which to invalidate (inclusive). Any pages which are found to be mapped |
memfd_tag_pins | |
memfd_wait_for_pins | Setting SEAL_WRITE requires us to verify there's no pending writer. However, via get_user_pages(), drivers might have some pending I/O without any active user-space mappings (e.g., direct-IO, AIO). Therefore, we look at all pages |
get_unlocked_entry | Look up entry in page cache, wait for it to become unlocked if it is a DAX entry and return it. The caller must subsequently call put_unlocked_entry() if it did not lock the entry or dax_unlock_entry() if it did |
grab_mapping_entry | Find page cache entry at given index. If it is a DAX entry, return it with the entry locked. If the page cache doesn't contain an entry at that index, add a locked empty entry. When requesting an entry with size DAX_PMD, grab_mapping_entry() will |
dax_layout_busy_page | dax_layout_busy_page - find first pinned page in @mapping. @mapping: address space to scan for a page with ref count > 1. DAX requires ZONE_DEVICE mapped pages. These pages are never 'onlined' to the page allocator so they are considered idle when |
__dax_invalidate_entry | |
dax_writeback_one |
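Most of the page-cache callers above use xa_is_value() the same way: while walking mapping->i_pages they must tell real struct page pointers apart from value ("exceptional") entries such as shadow or DAX entries, and either skip or specially handle the latter. Below is a hypothetical user-space sketch of that filtering step; find_real_pages(), the slots array, and the mock xa_* helpers are illustrative only, not kernel API.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* Same mock encoding as in the earlier sketch. */
static inline void *xa_mk_value(unsigned long v) { return (void *)((v << 1) | 1); }
static inline bool xa_is_value(const void *entry) { return (unsigned long)entry & 1; }

/* Hypothetical helper: collect only real pointers from an array of slots,
 * skipping value entries the way gang-lookup callers skip shadow/DAX
 * entries they encounter in the page cache. */
static size_t find_real_pages(void **slots, size_t nr_slots,
			      void **pages, size_t max_pages)
{
	size_t found = 0;

	for (size_t i = 0; i < nr_slots && found < max_pages; i++) {
		void *entry = slots[i];

		if (!entry)
			continue;		/* empty slot */
		if (xa_is_value(entry))
			continue;		/* value entry: not a page, skip it */
		pages[found++] = entry;		/* a real pointer */
	}
	return found;
}

int main(void)
{
	int a, b;				/* stand-ins for struct page */
	void *slots[] = { &a, xa_mk_value(3), NULL, &b };
	void *pages[4];
	size_t n = find_real_pages(slots, 4, pages, 4);

	printf("real pages found: %zu\n", n);	/* prints 2 */
	return 0;
}
```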