Function report |
Source Code:include/linux/vmstat.h |
Create Date:2022-07-28 05:43:33 |
Last Modify:2020-03-12 14:18:49 |
Name:count_vm_event (the kernel comment "Disable counters" marks this as the no-op variant used when CONFIG_VM_EVENT_COUNTERS is not set)
Proto:static inline void count_vm_event(enum vm_event_item item)
Return Type:void
Parameter:

Type | Name
---|---
enum vm_event_item | item
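For context, the prototype above covers both configurations of this function in include/linux/vmstat.h: with CONFIG_VM_EVENT_COUNTERS enabled it increments a per-CPU event counter, and with it disabled it is the empty "Disable counters" stub this report is named after. A minimal sketch of both variants, abridged from the upstream header (exact declarations may differ between kernel versions):

```c
#ifdef CONFIG_VM_EVENT_COUNTERS
/* Per-CPU storage: one slot per enum vm_event_item value. */
struct vm_event_state {
	unsigned long event[NR_VM_EVENT_ITEMS];
};

DECLARE_PER_CPU(struct vm_event_state, vm_event_states);

static inline void count_vm_event(enum vm_event_item item)
{
	/* Bump this CPU's counter for @item; cheap, no locking needed. */
	this_cpu_inc(vm_event_states.event[item]);
}
#else
/* Disable counters */
static inline void count_vm_event(enum vm_event_item item)
{
	/* CONFIG_VM_EVENT_COUNTERS=n: the call compiles away to nothing. */
}
#endif
```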
Caller:

Name | Description
---|---
filemap_fault | filemap_fault - read in file data for page fault handling. @vmf: struct vm_fault containing details of the fault. filemap_fault() is invoked via the vma operations vector for a mapped memory region to read in file data during a page fault |
__oom_kill_process | |
lru_cache_add_active_or_unevictable | lru_cache_add_active_or_unevictable - @page: the page to be added to LRU; @vma: vma in which page is mapped for determining reclaimability. Place @page on the active or unevictable LRU list, depending on its evictability |
__pagevec_lru_add_fn | |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
throttle_direct_reclaim | Throttle direct reclaimers if backing storage is backed by the network and the PFMEMALLOC reserve for the preferred node is getting dangerously depleted. kswapd will continue to make progress and wake the processes when the low watermark is reached. |
balance_pgdat | For kswapd, balance_pgdat() will reclaim pages across a node from zones that are eligible for use by the caller until at least one zone is balanced. Returns the order kswapd finished reclaiming at. |
kswapd_try_to_sleep | |
node_reclaim | |
do_swap_page | We enter with non-exclusive mmap_sem (to exclude vma changes, but allow concurrent faults), and pte mapped but not yet locked. We return with pte unmapped and unlocked. We return with the mmap_sem locked or unlocked in the same cases |
handle_mm_fault | By the time we get here, we already hold the mm semaphore. The mmap_sem may have been released depending on flags and our return value. See filemap_fault() and __lock_page_or_retry(). |
clear_page_mlock | LRU accounting for clear_page_mlock() |
mlock_vma_page | Mark page as mlocked if not already. If page on LRU, isolate and putback to move to unevictable list. |
__munlock_isolated_page | Finish munlock after successful page isolation. Page must be locked. This is a wrapper for try_to_munlock() and putback_lru_page() with munlock accounting. |
count_swpout_vm_event | |
__swap_writepage | |
swap_readpage | |
lookup_swap_cache | Lookup a swap entry in the swap cache. A found page will be returned unlocked and with its refcount incremented - we rely on the kernel lock getting page table operations atomic even if we drop the page lock before returning. |
swap_cluster_readahead | swap_cluster_readahead - swap in pages in hope we need them soon. @entry: swap entry of this memory; @gfp_mask: memory allocation flags; @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin. |
swap_vma_readahead | swap_vma_readahead - swap in pages in hope we need them soon. @entry: swap entry of this memory; @gfp_mask: memory allocation flags; @vmf: fault information. Returns the struct page for entry and addr, after queueing swapin. Primitive swap readahead code |
get_huge_zero_page | |
__do_huge_pmd_anonymous_page | |
do_huge_pmd_anonymous_page | |
do_huge_pmd_wp_page | |
__split_huge_pmd_locked | |
split_huge_page_to_list | This function splits huge page into normal pages. @page can point to any subpage of huge page to split. Split doesn't change the position of @page. Only caller must hold pin on the @page, otherwise split fails with -EBUSY. The huge page must be locked. |
deferred_split_huge_page | |
khugepaged_alloc_page | |
dax_iomap_pte_fault | |
drop_caches_sysctl_handler | |
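Every caller in the table uses the function the same way: it passes one of the enum vm_event_item constants at the point where the corresponding event occurs. The sketch below illustrates that pattern; my_fault_path() is a hypothetical helper, not a real kernel function, and the surrounding logic is omitted (compare the real call in handle_mm_fault(), which counts PGFAULT).

```c
#include <linux/mm.h>
#include <linux/vmstat.h>

/* Hypothetical illustration of the caller pattern shared by the table above. */
static vm_fault_t my_fault_path(struct vm_fault *vmf)
{
	/* Record one PGFAULT event on the current CPU; PGFAULT is one of the
	 * enum vm_event_item values from include/linux/vm_event_item.h. */
	count_vm_event(PGFAULT);

	/* ... the actual fault-handling work would happen here ... */
	return 0;
}
```

The per-CPU counts accumulated by these calls are summed across CPUs and exposed through /proc/vmstat.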