Function Logic Report
Source Code: mm/swap_state.c
Create Date: 2022-07-27 16:44:55
Last Modify: 2020-03-17 22:02:06 | Copyright © Brick
Function Name: Perform a free_page(), also freeing any swap cache associated with this page if it is the last user of the page.
Prototype: void put_page(struct page *page)
Return Type: void
Parameters:
Type | Name |
---|---|
struct page * | page |
Line | Logic |
---|---|
280 | If is_huge_zero_page(page) is false, perform a free_page(), also freeing any swap cache associated with this page if it is the last user of the page. |
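The report's single logic step says the reference is dropped only when the page is not the huge zero page. A minimal sketch of how that call site reads, assuming the mainline layout of this era where the comment above belongs to free_page_and_swap_cache() in mm/swap_state.c (the free_swap_cache() call is taken from mainline and is not part of this report):

```c
#include <linux/mm.h>        /* put_page() */
#include <linux/swap.h>      /* free_swap_cache() */
#include <linux/huge_mm.h>   /* is_huge_zero_page() */

/*
 * Sketch of the logic the report records at line 280: drop the swap-cache
 * reference, then release the caller's page reference unless the page is
 * the shared huge zero page, whose lifetime is managed separately.
 */
void sketch_free_page_and_swap_cache(struct page *page)
{
	free_swap_cache(page);              /* assumed from mainline: drop any swap-cache entry first */
	if (!is_huge_zero_page(page))       /* never put_page() the huge zero page */
		put_page(page);             /* may free the page if this was the last reference */
}
```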
Name | Description |
---|---|
hib_end_io | |
get_futex_key | get_futex_key() - Get parameters which are the keys for a futex; @uaddr: virtual address of the futex; @fshared: 0 for a PROCESS_PRIVATE futex, 1 for PROCESS_SHARED; @key: address where result is stored
stack_map_get_build_id | Parse build ID of ELF file mapped to vma |
perf_virt_to_phys | |
__replace_page | __replace_page - replace page in vma by new page |
__update_ref_ctr | |
uprobe_write_opcode | NOTE: Expect the breakpoint instruction to be the smallest size instruction for the architecture
__copy_insn | |
uprobe_clear_state | uprobe_clear_state - Free the area allocated for slots.
is_trap_at_addr | |
mount_block_root | |
replace_page_cache_page | replace_page_cache_page - replace a pagecache page with a new one; @old: page to be replaced; @new: page to replace with; @gfp_mask: allocation mode. This function replaces a page in the pagecache with a new one
__add_to_page_cache_locked | |
wait_on_page_bit_common | |
find_get_entry | find_get_entry - find and get a page cache entry; @mapping: the address_space to search; @offset: the page cache index. Looks up the page cache slot at @mapping & @offset
find_lock_entry | find_lock_entry - locate, pin and lock a page cache entry; @mapping: the address_space to search; @offset: the page cache index. Looks up the page cache slot at @mapping & @offset. If there is a page cache page, it is returned locked and with an increased
pagecache_get_page | pagecache_get_page - find and get a page reference; @mapping: the address_space to search; @offset: the page index; @fgp_flags: PCG flags; @gfp_mask: gfp mask to use for the page cache data page allocation. Looks up the page cache slot at @mapping & @offset.
find_get_entries | find_get_entries - gang pagecache lookup; @mapping: The address_space to search; @start: The starting page cache index; @nr_entries: The maximum number of entries; @entries: Where the resulting entries are placed; @indices: The cache indices corresponding to the
find_get_pages_range | find_get_pages_range - gang pagecache lookup; @mapping: The address_space to search; @start: The starting page index; @end: The final page index (inclusive); @nr_pages: The maximum number of pages; @pages: Where the resulting pages are placed
find_get_pages_contig | find_get_pages_contig - gang contiguous pagecache lookup; @mapping: The address_space to search; @index: The starting page index; @nr_pages: The maximum number of pages; @pages: Where the resulting pages are placed. find_get_pages_contig() works exactly like
find_get_pages_range_tag | find_get_pages_range_tag - find and return pages in given range matching @tag; @mapping: the address_space to search; @index: the starting page index; @end: The final page index (inclusive); @tag: the tag index; @nr_pages: the maximum number of pages; @pages:
generic_file_buffered_read | generic_file_buffered_read - generic file read routine; @iocb: the iocb to read; @iter: data destination; @written: already copied. This is a generic file read routine, and uses the mapping->a_ops->readpage() function for the actual low-level stuff.
filemap_fault | filemap_fault - read in file data for page fault handling; @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault
filemap_map_pages | |
wait_on_page_read | |
do_read_cache_page | |
write_one_page | write_one_page - write out a single page and wait on I/O; @page: the page to write. The page must be locked by the caller and will be unlocked upon return
read_cache_pages_invalidate_page | see if a page needs releasing upon read_cache_pages() failure - the caller of read_cache_pages() may have set PG_private or PG_fscache before calling, such as the NFS fs marking pages that are cached locally on disk, thus we need to give the fs a
read_cache_pages | read_cache_pages - populate an address space with some pages & start reads against them; @mapping: the address_space; @pages: The address of a list_head which contains the target pages. These pages have their ->index populated and are otherwise uninitialised.
read_pages | |
put_pages_list | put_pages_list() - release a list of pages; @pages: list of pages threaded on page->lru. Release a list of pages which are strung together on page.lru. Currently used by read_cache_pages() and related error recovery code.
truncate_inode_pages_range | truncate_inode_pages_range - truncate range of pages specified by start & end byte offsets; @mapping: mapping to truncate; @lstart: offset from which to truncate; @lend: offset to which to truncate (inclusive). Truncate the page cache, removing the pages that
invalidate_mapping_pages | invalidate_mapping_pages - Invalidate all the unlocked pages of one inode; @mapping: the address_space which holds the pages to invalidate; @start: the offset 'from' which to invalidate; @end: the offset 'to' which to invalidate (inclusive). This function only
invalidate_complete_page2 | This is like invalidate_complete_page(), except it ignores the page's refcount
pagecache_isize_extended | pagecache_isize_extended - update pagecache after extension of i_size; @inode: inode for which i_size was extended; @from: original inode size; @to: new inode size. Handle extension of inode size either caused by extending truncate or by
putback_lru_page | putback_lru_page - put previously isolated page onto appropriate LRU list; @page: page to be put back to appropriate lru list. Add previously isolated @page to appropriate LRU list. Page may still be unevictable for other reasons.
follow_page_pte | |
follow_pmd_mask | |
check_and_migrate_cma_pages | |
__gup_longterm_locked | __gup_longterm_locked() is a wrapper for __get_user_pages_locked which allows us to process the FOLL_LONGTERM flag.
free_page_series | free a contiguous series of pages
zap_pte_range | |
wp_page_copy | Handle the case of a page which we actually need to copy to a new page. Called with mmap_sem locked and the old page referenced, but without the ptl held. High level logic flow: - Allocate a page, copy the content of the old page to the new one.
wp_page_shared | |
do_wp_page | This routine handles present pages, when users try to write to a shared page. It is done by copying the page to a new address and decrementing the shared-page counter for the old page. Note that this routine assumes that the protection checks have been
do_swap_page | We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases
do_anonymous_page | We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with mmap_sem still held, but pte unmapped and unlocked.
__do_fault | The mmap_sem must have been held on entry, and may have been released depending on flags and vma->vm_ops->fault() return value. See filemap_fault() and __lock_page_retry().
do_read_fault | |
do_cow_fault | |
do_shared_fault | |
do_numa_page | |
__access_remote_vm | Access another process' address space as given in mm. If non-NULL, use the given task for page fault accounting.
mincore_page | Later we can get more picky about what "in core" means precisely. For now, simply check to see if the page is in the page cache, and is up to date; i.e. that no page-in operation would be required
__munlock_pagevec | Munlock a batch of pages from the same zone. The work is split to two main phases
munlock_vma_pages_range | munlock_vma_pages_range() - munlock all pages in the vma range; @vma - vma containing range to be munlock()ed; @start - start address in @vma of the range; @end - end of range in @vma. For mremap(), munmap() and exit(). Called with @vma VM_LOCKED.
try_to_unmap_one | @arg: enum ttu_flags will be passed to this argument |
process_vm_rw_single_vec | process_vm_rw_single_vec - read/write pages from task specified; @addr: start memory address of target process; @len: size of area to copy to/from; @iter: where to copy to/from locally; @process_pages: struct pages area that can store at least
madvise_cold_or_pageout_pte_range | |
madvise_free_pte_range | |
madvise_inject_error | Error injection support for memory error handling. |
put_page | Perform a free_page(), also freeing any swap cache associated with this page if it is the last user of the page.
__read_swap_cache_async | |
swap_cluster_readahead | swap_cluster_readahead - swap in pages in hope we need them soon; @entry: swap entry of this memory; @gfp_mask: memory allocation flags; @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin.
swap_vma_readahead | swap_vma_readahead - swap in pages in hope we need them soon; @entry: swap entry of this memory; @gfp_mask: memory allocation flags; @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin. Primitive swap readahead code
__try_to_reclaim_swap | Returns 1 if swap entry is freed
unuse_pte | No need to decide whether this PTE shares the swap entry with others, just let do_wp_page work it out if a write is requested later - to force COW, vm_page_prot omits write permission from any private vma.
unuse_pte_range | |
try_to_unuse | If the boolean frontswap is true, only unuse pages_to_unuse pages; pages_to_unuse==0 means all pages; ignored if frontswap is false
SYSCALL_DEFINE2 | |
zswap_writeback_entry | |
alloc_pool_huge_page | Allocates a fresh page to the hugetlb allocator pool in the node interleaved manner.
alloc_surplus_huge_page | Allocates a fresh surplus page from the page allocator. |
gather_surplus_pages | Increase the hugetlb pool such that it can accommodate a reservation of size 'delta'.
gather_bootmem_prealloc | Put bootmem huge pages into the standard lists after mem_map is up |
hugetlb_cow | Hugetlb_cow() should be called with page lock of the original hugepage held. Called with hugetlb_instantiation_mutex held and pte_page locked so we cannot race with other handlers or page migration.
hugetlbfs_pagecache_present | Return whether there is a pagecache page to back given address within VMA. Caller follow_hugetlb_page() holds page_table_lock so we cannot lock_page.
hugetlb_no_page | |
hugetlb_fault | |
hugetlb_mcopy_atomic_pte | Used by userfaultfd UFFDIO_COPY. Based on mcopy_atomic_pte with modifications for huge pages.
putback_active_hugepage | |
lookup_node | |
break_ksm | We use break_ksm to break COW on a ksm page: it's a stripped down if (get_user_pages(addr, 1, 1, 1, &page, NULL) == 1) put_page(page); but taking great care only to touch a ksm page, in a VM_MERGEABLE vma,
get_mergeable_page | |
get_ksm_page | get_ksm_page: checks if the page indicated by the stable node is still its ksm page, despite having held no reference to it. In which case we can trust the content of the page, and it returns the gotten page; but if the page has now been zapped,
remove_rmap_item_from_tree | Removing rmap_item from stable or unstable tree. This function will clean the information from the stable/unstable tree.
replace_page | replace_page - replace page in vma by new ksm page; @vma: vma that holds the pte pointing to page; @page: the page we are replacing by kpage; @kpage: the ksm page we replace page by; @orig_pte: the original value of the pte
stable_node_dup | |
stable_tree_search | stable_tree_search - search for page inside the stable tree. This function checks if there is a page inside the stable tree with identical content to the page that we are scanning right now
stable_tree_insert | stable_tree_insert - insert stable tree node pointing to new ksm page into the stable tree. This function returns the stable tree node just allocated on success, NULL otherwise.
unstable_tree_search_insert | unstable_tree_search_insert - search for identical page, else insert rmap_item into the unstable tree. This function searches for a page in the unstable tree identical to the page currently being scanned; and if no identical page is found in the
cmp_and_merge_page | cmp_and_merge_page - first see if page can be merged into the stable tree; if not, compare checksum to previous and if it's the same, see if page can be inserted into the unstable tree, or merged with a page already there and
scan_get_next_rmap_item | |
ksm_do_scan | ksm_do_scan - the ksm scanner main worker function; @scan_npages: number of pages we want to scan before we return.
isolate_movable_page | |
putback_movable_pages | Put previously isolated pages back onto the appropriate lists from where they were once taken off for compaction/migration. This function shall be used whenever the isolated pageset has been built from lru, balloon, hugetlbfs page
__buffer_migrate_page | |
__unmap_and_move | |
unmap_and_move | Obtain the lock on page, remove all ptes and migrate the page to the newly allocated page in newpage.
add_page_for_migration | Resolves the given address to a struct page, isolates it from the LRU and puts it to the given pagelist
__do_huge_pmd_anonymous_page | |
do_huge_pmd_wp_page_fallback | |
do_huge_pmd_wp_page | |
do_huge_pmd_numa_page | NUMA hinting page fault entry point for trans huge pmds |
madvise_free_huge_pmd | Return true if we do MADV_FREE successfully on entire pmd page. Otherwise, return false.
__split_huge_pmd_locked | |
__split_huge_page | |
deferred_split_scan | |
khugepaged_prealloc_page | |
khugepaged_do_scan | |
get_mctgt_type | get_mctgt_type - get target type of moving charge; @vma: the vma the pte to be checked belongs; @addr: the address corresponding to the pte to be checked; @ptent: the pte to be checked; @target: the pointer the target page or swap ent will be stored(can be
mem_cgroup_move_charge_pte_range | |
__gup_benchmark_ioctl | |
delete_from_lru_cache | XXX: It is possible that a page is isolated from LRU cache, and then kept in swap cache or failed to remove from page cache. The page count will stop it from being freed by unpoison. Stress tests should be aware of this memory leak problem.
me_huge_page | Huge pages. Needs work. Issues: - Error on hugepage is contained in hugepage unit (not in raw page unit.) To narrow down kill region to one page, we need to break up pmd.
get_hwpoison_page | get_hwpoison_page() - Get refcount for memory error handling; @page: raw error page (hit by memory error). Return: return 0 if failed to grab the refcount, otherwise true (some non-zero value.)
__free_zspage | |
z3fold_page_migrate | |
mcopy_atomic_pte | |
__mcopy_atomic_hugetlb | __mcopy_atomic processing for HUGETLB vmas. Note that this routine is called with mmap_sem held, it will release mmap_sem before returning.
__mcopy_atomic | |
page_idle_get_page | Idle page tracking only considers user memory pages, for other types of pages the idle flag is always unset and an attempt to set it is silently ignored
page_idle_bitmap_read | |
page_idle_bitmap_write | |
put_vaddr_frames | put_vaddr_frames() - drop references to pages if get_vaddr_frames() acquired them; @vec: frame vector to put. Drop references to pages if get_vaddr_frames() acquired them. We also invalidate the frame vector so that it is prepared for the next call into
__bio_iov_iter_get_pages | __bio_iov_iter_get_pages - pin user or kernel pages and add them to a bio; @bio: bio to add pages to; @iter: iov iterator describing the region to be mapped. Pins pages from *iter and appends them to @bio's bvec array. The
bio_map_user_iov | bio_map_user_iov - map user iovec into bio; @q: the struct request_queue for the bio; @iter: iovec iterator; @gfp_mask: memory allocation flags. Map the user space address into a bio suitable for io to a block device. Returns an error pointer in case of error.
read_dev_sector | |
vfs_dedupe_file_range_compare | Compare extents of two files to see if they are the same. Caller must have locked both inodes to prevent write races.
put_arg_page | |
anon_pipe_buf_release | |
generic_pipe_buf_release | generic_pipe_buf_release - put a reference to a &struct pipe_buffer; @pipe: the pipe that the buffer belongs to; @buf: the buffer to put a reference to. Description: This function releases a reference to @buf.
page_get_link | get the link contents into pagecache |
page_put_link | |
simple_write_end | simple_write_end - .write_end helper for non-block-device FSes; @file: See .write_end of address_space_operations; @mapping: "; @pos: "; @len: "; @copied: "; @page: "; @fsdata: ". simple_write_end does the minimum needed for updating a page after writing is
page_cache_pipe_buf_release | |
default_file_splice_read | |
iter_to_pipe | |
__clear_page_buffers | |
__find_get_block_slow | Various filesystems appear to want __find_get_block to be non-blocking |
grow_dev_page | Create the page-cache page that contains the requested block. This is used purely for blockdev mappings.
block_write_begin | block_write_begin takes care of the basic task of block allocation and bringing partial write blocks uptodate first. The filesystem needs to handle block truncation upon failure.
generic_write_end | |
nobh_write_begin | On entry, the page is fully not uptodate. On exit the page is fully uptodate in the areas outside (from,to). The filesystem needs to handle block truncation upon failure.
nobh_write_end | |
nobh_truncate_page | |
block_truncate_page | |
blkdev_write_end | |
dio_cleanup | Release any resources in case of a failure |
submit_page_section | An autonomous function to put a chunk of a page under deferred IO. The caller doesn't actually know (or care) whether this piece of page is in a BIO, or is under IO or whatever. We just take care of all possible situations here
do_direct_IO | Walk the user pages, and the file, mapping blocks to disk and generating a sequence of (page,offset,len,block) mappings. These mappings are injected into submit_page_section(), which takes care of the next stage of submission
do_blockdev_direct_IO | This is a library function for use by filesystem drivers |
mpage_readpages | mpage_readpages - populate an address space with some pages & start reads against them; @mapping: the address_space; @pages: The address of a list_head which contains the target pages. These
aio_free_ring | |
aio_migratepage | |
build_merkle_tree_level | |
verify_page | Verify a single data page against the file's Merkle tree |
iomap_page_release | |
iomap_next_page | |
iomap_readpages_actor | |
iomap_readpages | |
iomap_migrate_page | |
iomap_write_begin | |
iomap_write_end | |
ramfs_nommu_expand_for_mapping | add a contiguous set of pages into a ramfs inode when it's truncated from size 0 on the assumption that it's going to be used for an mmap of shared memory
ramfs_nommu_get_unmapped_area | |
put_user_page | put_user_page() - release a gup-pinned page; @page: pointer to page to be released. Pages that were pinned via get_user_pages*() must be released via either put_user_page(), or one of the put_user_pages*() routines below
put_dev_sector | |
__skb_frag_unref | Release a reference on a paged fragment
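Most of the call sites listed above follow the same pairing: take a reference on a page, use it, then drop it with put_page() (the break_ksm and put_user_page entries spell this out). A minimal sketch of that pattern; the helper name and the work done while holding the reference are illustrative, not taken from this report:

```c
#include <linux/mm.h>   /* get_page(), put_page() */

/*
 * Illustrative sketch of the get/put reference pattern behind the call
 * sites above: a reference pins the page, and put_page() releases it,
 * freeing the page once the last user drops its reference.
 */
static void sketch_use_page_briefly(struct page *page)
{
	get_page(page);          /* pin the page so it cannot be freed under us */

	/* ... operate on the page while the reference is held ... */

	put_page(page);          /* release; the page is freed if we were the last user */
}
```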