Function report
Source Code: include/linux/pagemap.h
Create Date: 2022-07-28 05:45:06
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: trylock_page
Description: Return true if the page was successfully locked
Proto: static inline int trylock_page(struct page *page)
Return Type: int
Parameter:
Type | Name |
---|---|
struct page * | page |
469 | page = compound_head(page) |
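Only the body line at pagemap.h:469 is listed above. For context, a minimal sketch of the whole helper follows; it is inferred from the prototype and the PG_locked bit-lock convention used by the page cache, not quoted from the annotated source:

static inline int trylock_page(struct page *page)
{
	/* line 469: the lock bit lives on the head page of a compound page */
	page = compound_head(page);
	/* non-blocking: set PG_locked atomically, succeed only if it was clear */
	return (likely(!test_and_set_bit_lock(PG_locked, &page->flags)));
}

Because it never sleeps, callers that fail to take the lock immediately typically either fall back to the sleeping lock_page() or simply skip the page, as the caller list below suggests.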
Caller | Description |
---|---|
pagecache_get_page | pagecache_get_page - find and get a page reference. @mapping: the address_space to search; @offset: the page index; @fgp_flags: PCG flags; @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset. |
generic_file_buffered_read | generic_file_buffered_read - generic file read routine. @iocb: the iocb to read; @iter: data destination; @written: already copied. This is a generic file read routine, and uses the mapping->a_ops->readpage() function for the actual low-level stuff. |
lock_page_maybe_drop_mmap | lock_page_maybe_drop_mmap - lock the page, possibly dropping the mmap_sem. @vmf: the vm_fault for this fault; @page: the page to lock; @fpin: the pointer to the file we may pin (or is already pinned). |
filemap_map_pages | |
read_cache_pages_invalidate_page | see if a page needs releasing upon read_cache_pages() failure - the caller of read_cache_pages() may have set PG_private or PG_fscache before calling, such as the NFS fs marking pages that are cached locally on disk, thus we need to give the fs a |
truncate_inode_pages_range | truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets. @mapping: mapping to truncate; @lstart: offset from which to truncate; @lend: offset to which to truncate (inclusive). Truncate the page cache, removing the pages that |
invalidate_mapping_pages | invalidate_mapping_pages - Invalidate all the unlocked pages of one inode. @mapping: the address_space which holds the pages to invalidate; @start: the offset 'from' which to invalidate; @end: the offset 'to' which to invalidate (inclusive). This function only |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
__isolate_lru_page | Attempt to remove the specified page from its LRU. Only take this page if it is of the appropriate PageActive status. Pages which are being freed elsewhere are also ignored. Returns 0 on success, -ve errno on failure. |
shrink_active_list | |
follow_page_pte | |
do_wp_page | This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been |
page_referenced | page_referenced - test if the page was referenced. @page: the page to test; @is_locked: caller holds lock on the page; @memcg: target memory cgroup; @vm_flags: collect encountered vma->vm_flags who actually referenced the page. Quick test_and_clear_referenced |
madvise_cold_or_pageout_pte_range | |
madvise_free_pte_range | |
swap_readpage | |
free_swap_cache | If we are the only user, then try to free up the swap cache. It's ok to check for PageSwapCache without the page lock here because we are going to recheck again inside try_to_free_swap() _with_ the lock. - Marcelo |
__try_to_reclaim_swap | returns 1 if swap entry is freed |
hugetlb_fault | |
get_ksm_page | get_ksm_page: checks if the page indicated by the stable node is still its ksm page, despite having held no reference to it. In which case we can trust the content of the page, and it returns the gotten page; but if the page has now been zapped, |
try_to_merge_one_page | try_to_merge_one_page - take two pages and merge them into one. @vma: the vma that holds the pte pointing to page; @page: the PageAnon page that we want to replace with kpage; @kpage: the PageKsm page that we want to map instead of page, |
cmp_and_merge_page | cmp_and_merge_page - first see if page can be merged into the stable tree; if not, compare checksum to previous and if it's the same, see if page can be inserted into the unstable tree, or merged with a page already there and |
isolate_movable_page | |
__unmap_and_move | |
unmap_and_move_huge_page | Counterpart of unmap_and_move_page() for hugepage migration |
do_huge_pmd_wp_page | |
follow_trans_huge_pmd | |
do_huge_pmd_numa_page | NUMA hinting page fault entry point for trans huge pmds |
madvise_free_huge_pmd | Return true if we do MADV_FREE successfully on entire pmd page. Otherwise, return false. |
deferred_split_scan | |
__collapse_huge_page_isolate | |
mem_cgroup_move_account | mem_cgroup_move_account - move account of the page. @page: the page; @compound: charge the page as compound or small page; @from: mem_cgroup which the page is moved from; @to: mem_cgroup which the page is moved to. @from != @to. |
trylock_zspage | |
z3fold_alloc | z3fold_alloc() - allocates a region of a given size. @pool: z3fold pool from which to allocate; @size: size in bytes of the desired allocation; @gfp: gfp flags used if the pool needs to grow; @handle: handle of the new allocation. This function will attempt |
balloon_page_enqueue_one | |
balloon_page_list_dequeue | balloon_page_list_dequeue() - removes pages from balloon's page list and returns a list of the pages. @b_dev_info: balloon device descriptor where we will grab a page from; @pages: pointer to the list of pages that would be returned to the caller. |
page_idle_clear_pte_refs |
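Most of the callers above use trylock_page() on paths where sleeping for the page lock is not acceptable, such as reclaim, migration, and opportunistic scans. A hypothetical, minimal caller pattern (example_touch_page is illustrative and not taken from any of the functions listed):

static void example_touch_page(struct page *page)
{
	/* skip the page if someone else already holds PG_locked */
	if (!trylock_page(page))
		return;

	/* ... inspect or modify the page while it is locked ... */

	unlock_page(page);	/* clear PG_locked and wake any waiters */
}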