Caller | Description |
--- | --- |
page_endio | After completing I/O on a page, call this routine to update the page flags appropriately. |
find_lock_entry | find_lock_entry - locate, pin and lock a page cache entry. @mapping: the address_space to search. @offset: the page cache index. Looks up the page cache slot at @mapping & @offset; if there is a page cache page, it is returned locked and with an increased refcount (see the lookup sketch after this table). |
__set_page_dirty_nobuffers | For address_spaces which do not use buffers. Just tag the page as dirty in the xarray. This is also used when a single buffer is being dirtied: we want to set the page dirty in that case, but not all the buffers. This is a "bottom-up" dirtying, whereas __set_page_dirty_buffers() is a "top-down" dirtying. |
set_page_dirty | Dirty a page |
__cancel_dirty_page | This cancels just the dirty bit on the kernel page itself; it does NOT actually remove dirty bits on any mmaps that may be around. |
clear_page_dirty_for_io | Clear a page's dirty flag, while caring for dirty memory accounting. Returns true if the page was previously dirty. This is for preparing to put the page under writeout: we leave the page tagged as dirty in the xarray so that a concurrent write-for-sync can still discover it (see the writepage sketch after this table). |
test_clear_page_writeback | Clear the writeback flag on a page and update writeback accounting; returns whether the flag had been set. |
__test_set_page_writeback | Set the writeback flag on a page and update writeback accounting; returns the flag's previous state. |
wait_on_page_writeback | Wait for a page to complete writeback |
invalidate_inode_page | Safely invalidate one page from its pagecache mapping. It only drops clean, unused pages. The page must be locked. Returns 1 if the page is successfully invalidated, otherwise 0. |
handle_write_error | We detected a synchronous write error writing a page out, probably -ENOSPC. We need to propagate that into the address_space for a subsequent fsync(), msync() or close(). The tricky part is that after writepage we cannot touch the mapping: nothing prevents it from being freed up. |
__remove_mapping | Same as remove_mapping, but if the page is removed from the mapping, it gets returned with a refcount of 0. |
page_check_dirty_writeback | Check if a page is dirty or under writeback |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
__isolate_lru_page | Attempt to remove the specified page from its LRU. Only take this page if it is of the appropriate PageActive status. Pages which are being freed elsewhere are also ignored. Returns 0 on success, a negative errno on failure. |
page_evictable | page_evictable - test whether a page is evictable. @page: the page to test. Tests whether the page is evictable, i.e. whether it should be placed on the active/inactive lists rather than the unevictable list. |
page_mapping_file | For file cache pages, return the address_space, otherwise return NULL |
isolate_migratepages_block | isolate_migratepages_block() - isolate all migrate-able pages within a single pageblock. @cc: Compaction control structure. @low_pfn: The first PFN to isolate. @end_pfn: The one-past-the-last PFN to isolate, within the same pageblock. |
__dump_page | Print diagnostic state for a page (flags, refcount, mapcount, mapping) to the kernel log; used for debugging. |
page_mkclean | Write-protect and clean the ptes that map a file page; returns whether any pte was cleaned. |
page_add_file_rmap | page_add_file_rmap - add pte mapping to a file page. @page: the page to add the mapping to. @compound: charge the page as compound or small page. The caller needs to hold the pte lock. |
rmap_walk_file | rmap_walk_file - do something to a file page using the object-based rmap method. @page: the page to be handled. @rwc: control variable according to each walk type. Finds all the mappings of a page using the mapping pointer and the vma chains contained in its address_space (see the rmap-walk sketch after this table). |
isolate_movable_page | Isolate a non-LRU movable page in preparation for migration. |
putback_movable_page | It should be called on a page which is PG_movable. |
move_to_new_page | Move a page to a newly allocated page. The page is locked and all ptes have been successfully removed. The new page will have replaced the old page if this function is successful. Return value: < 0 - error code; MIGRATEPAGE_SUCCESS - success. |
unmap_and_move_huge_page | Counterpart of unmap_and_move_page() for hugepage migration |
mem_cgroup_move_account | mem_cgroup_move_account - move account of the page. @page: the page. @compound: charge the page as compound or small page. @from: mem_cgroup which the page is moved from. @to: mem_cgroup which the page is moved to; @from != @to. |
hwpoison_filter_dev | Memory-failure filter: match the poisoned page's backing device against the configured major/minor numbers. |
me_pagecache_clean | Clean (or cleaned) page cache page. |
me_pagecache_dirty | Dirty pagecache page. Issues: when the error hits a hole page, the error is not properly propagated. |
me_huge_page | Huge pages. Needs work. Issues: an error on a hugepage is contained in the hugepage unit (not in the raw page unit); to narrow the kill region down to one page, we need to break up the pmd. |
hwpoison_user_mappings | Do all that is necessary to remove user space mappings. Unmap the pages and send SIGBUS to the processes if the data was dirty. |
unpoison_memory | unpoison_memory - Unpoison a previously poisoned page. @pfn: Page number of the page to be unpoisoned. Software-unpoison a page that has been poisoned by memory_failure() earlier. |
z3fold_page_migrate | z3fold's page-migration callback, invoked when a z3fold page is being migrated. |
__set_page_dirty_buffers | Add a page to the dirty page list. It is a sad fact of life that this function is called from several places deeply under spinlocking; it may not sleep. If the page has buffers, the uptodate buffers are set dirty, to preserve dirty-state coherency between the page and the buffers. |
mark_buffer_dirty | mark_buffer_dirty - mark a buffer_head as needing writeout. @bh: the buffer_head to mark dirty. mark_buffer_dirty() will set the dirty bit against the buffer, then set its backing page dirty, then tag the page as dirty in the page cache, and then attach the address_space's inode to its superblock's dirty inode list. |
iomap_set_page_dirty | set_page_dirty implementation used by iomap-based filesystems; marks the page dirty without touching buffers. |
delete_from_page_cache | delete_from_page_cache - delete page from page cache. @page: the page which the kernel is trying to remove from page cache. This must be called only on pages that have been verified to be in the page cache and locked. |
page_cache_pipe_buf_steal | Attempt to steal a page from a pipe buffer. This should perhaps go into a vm helper function; it's already simplified quite a bit by the addition of remove_mapping(). If success is returned, the caller may attempt to reuse this page for another destination. |
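
Several rows above (find_lock_entry, wait_on_page_writeback) combine into one common pattern: look a page up in the page cache, lock it, and wait out any in-flight writeback before touching its contents. The sketch below illustrates that pattern in kernel-style C; grab_stable_page is a hypothetical helper name, and the snippet assumes a kernel-internal context rather than a standalone program.

```c
#include <linux/pagemap.h>

/*
 * Hypothetical helper: find a page in the page cache, lock it, and wait
 * for any in-flight writeback before the caller touches the data.
 */
static struct page *grab_stable_page(struct address_space *mapping,
				     pgoff_t index)
{
	struct page *page;

	/* Returned locked and with an elevated refcount, or NULL. */
	page = find_lock_page(mapping, index);
	if (!page)
		return NULL;

	/* PG_writeback may still be set from a previous writeout. */
	wait_on_page_writeback(page);
	return page;
}
```

The caller is expected to unlock_page() and put_page() the result, since find_lock_page() hands the page back locked with an increased refcount.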
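The dirty/writeback rows (clear_page_dirty_for_io, __test_set_page_writeback, test_clear_page_writeback, handle_write_error) describe the handshake a ->writepage implementation performs around the actual I/O. A minimal sketch of that order of operations follows, assuming synchronous I/O; example_writepage and my_device_write are hypothetical names, and mapping_set_error() is the propagation step that handle_write_error relies on.

```c
#include <linux/mm.h>
#include <linux/pagemap.h>
#include <linux/writeback.h>

/* Hypothetical I/O helper; stands in for the real write submission. */
static int my_device_write(struct page *page);

/* Hypothetical ->writepage body, following the table above. */
static int example_writepage(struct page *page, struct writeback_control *wbc)
{
	struct address_space *mapping = page->mapping;
	int err;

	/* Leaves the xarray dirty tag in place for write-for-sync walkers. */
	if (!clear_page_dirty_for_io(page)) {
		unlock_page(page);		/* page was already clean */
		return 0;
	}

	set_page_writeback(page);	/* wraps __test_set_page_writeback() */
	unlock_page(page);

	err = my_device_write(page);
	if (err)
		mapping_set_error(mapping, err); /* for a later fsync()/msync() */

	/* Clears PG_writeback (test_clear_page_writeback) and wakes waiters. */
	end_page_writeback(page);
	return err;
}
```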
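rmap_walk_file is the file-page arm of the object-based reverse-mapping walk, which is driven through struct rmap_walk_control. Below is a minimal sketch of how such a walk is set up; example_one_vma and example_walk are hypothetical names, and the callback signature assumes the bool-returning rmap_one of recent kernels.

```c
#include <linux/rmap.h>

/*
 * Hypothetical callback: invoked once for each VMA that maps @page.
 * Returning false terminates the walk early.
 */
static bool example_one_vma(struct page *page, struct vm_area_struct *vma,
			    unsigned long address, void *arg)
{
	/* Inspect or update the pte mapping @page at @address here. */
	return true;
}

/* Hypothetical driver of the walk. */
static void example_walk(struct page *page)
{
	struct rmap_walk_control rwc = {
		.rmap_one = example_one_vma,
	};

	/* Dispatches to rmap_walk_file() when @page is a file page. */
	rmap_walk(page, &rwc);
}
```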