Function Logic Report
Source Code: include/linux/mm.h
Create Date: 2022-07-27 06:44:42
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function Name: page_to_nid
Prototype: static inline int page_to_nid(const struct page *page)
Return Type: int
Parameters:
Type | Name |
---|---|
const struct page * | page |
1087 | return (({ |
1087 | VM_BUG_ON_PGFLAGS(PagePoisoned(p), p); p; |
1087 | })->flags >> NODES_PGSHIFT) & NODES_MASK; |
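The body at line 1087 is the expansion of PF_POISONED_CHECK(p): the ({ ... }) block only asserts (under CONFIG_DEBUG_VM) that the struct page has not been poisoned, and the real work is a shift-and-mask on page->flags, where the node id lives in a bitfield near the top of the word. Below is a minimal userspace sketch of the same arithmetic, assuming a 64-bit flags word and made-up shift/width values; the real NODES_PGSHIFT and NODES_MASK are derived from the kernel's page->flags layout and vary by configuration.

```c
/* Illustrative sketch of the bit arithmetic behind page_to_nid().
 * The shift/width values are examples only; the kernel derives
 * NODES_PGSHIFT and NODES_MASK from the layout of page->flags. */
#include <stdio.h>

#define NODES_SHIFT     6                    /* example: up to 64 nodes */
#define NODES_PGSHIFT   (64 - NODES_SHIFT)   /* example: node id in the top bits */
#define NODES_MASK      ((1UL << NODES_SHIFT) - 1)

struct fake_page {
    unsigned long flags;   /* stands in for page->flags */
};

static inline int fake_page_to_nid(const struct fake_page *page)
{
    /* Same expression as the kernel's, minus the poison check. */
    return (page->flags >> NODES_PGSHIFT) & NODES_MASK;
}

int main(void)
{
    struct fake_page page = { .flags = 0 };

    /* Encode node 3 the way the kernel does when it initializes a page. */
    page.flags |= (3UL & NODES_MASK) << NODES_PGSHIFT;

    printf("nid = %d\n", fake_page_to_nid(&page));   /* prints: nid = 3 */
    return 0;
}
```

The table below lists the functions that call page_to_nid.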
Caller | Description |
---|---|
reclaim_pages | |
list_lru_add | |
list_lru_del | |
new_non_cma_page | |
do_numa_page | |
change_pte_range | |
show_numa_info | |
move_freepages | Move the free pages in a range to the free lists of the requested type. Note that start_page and end_page are not aligned on a pageblock boundary. If alignment is required, use move_freepages_block(). |
enqueue_huge_page | |
update_and_free_page | |
__free_huge_page | |
alloc_fresh_huge_page | Common helper to allocate a fresh hugetlb page. All specific allocators should use this function to get new hugetlb pages. |
dissolve_free_huge_page | Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use hugepages and non-hugepages. This function returns values like below: -EBUSY: failed to dissolve free hugepages or the hugepage is in-use. |
alloc_surplus_huge_page | Allocates a fresh surplus page from the page allocator. |
gather_bootmem_prealloc | Put bootmem huge pages into the standard lists after mem_map is up |
move_hugetlb_state | |
queue_pages_required | Check if the page's nid is in qp->nmask. If MPOL_MF_INVERT is set in qp->flags, check if the nid is in the invert of qp->nmask. |
lookup_node | |
alloc_page_interleave | Allocate a page in interleaved policy. Own path because it needs to do special accounting. |
mpol_misplaced | mpol_misplaced - check whether the current page node is valid in policy. @page: page to be checked; @vma: vm area where page mapped; @addr: virtual address where page mapped. Looks up the current policy node id for vma,addr and "compares to" the page's node id. |
slob_alloc | slob_alloc: entry point into the slob allocator. |
unstable_tree_search_insert | unstable_tree_search_insert - search for identical page, else insert rmap_item into the unstable tree. This function searches for a page in the unstable tree identical to the page currently being scanned; and if no identical page is found in the tree, inserts rmap_item as a new object into the unstable tree. |
cache_free_pfmemalloc | |
cache_free_alien | |
cache_grow_begin | Grow (by 1) the number of slabs within a cache. This is called by kmem_cache_alloc() when there are no active objs left in a cache. |
cache_grow_end | |
fallback_alloc | Fallback function if there was no memory available and no objects on a certain node and fall back is permitted. First we scan all the available nodes for available objects. If that fails then we perform an allocation without specifying a node. |
allocate_slab | |
discard_slab | |
deactivate_slab | Remove the cpu slab |
node_match | Check if the objects in a per cpu structure fit numa locality expectations. |
__slab_free | Slow path handling. This may still be called frequently since objects have a longer lifetime than the cpu slabs in most processing loads. So we still attempt to reduce cache line usage. Just take the slab lock and free the item. |
early_kmem_cache_node_alloc | No kmalloc_node yet so do it by hand. We know that this is the first slab on the node for this slabcache. There are no concurrent accesses possible. Note that this function only works on the kmem_cache_node when allocating for the kmem_cache_node. |
add_page_for_migration | Resolves the given address to a struct page, isolates it from the LRU and puts it on the given pagelist. |
do_pages_stat_array | Determine the nodes of an array of pages and store it in an array of status. |
get_deferred_split_queue | |
do_huge_pmd_wp_page_fallback | |
do_huge_pmd_numa_page | NUMA hinting page fault entry point for trans huge pmds |
split_huge_page_to_list | This function splits a huge page into normal pages. @page can point to any subpage of the huge page to split; the split doesn't change the position of @page. The caller must hold a pin on the @page, otherwise the split fails with -EBUSY. The huge page must be locked. |
deferred_split_huge_page | |
khugepaged_scan_pmd | |
mem_cgroup_page_nodeinfo | |
soft_limit_tree_from_page | |
mem_cgroup_try_charge_delay | |
shake_page | When an unknown page type is encountered, drain as many buffers as possible in the hope of turning the page into an LRU or free page, which we can handle. |
new_page | |
kmemleak_scan | Scan data sections and all the referenced memory blocks allocated via the kernel's standard allocators. This function must be called with the scan_mutex held. |
lookup_page_ext | |
page_cpupid_xchg_last | |
page_cpupid_last | |
page_zone | |
page_pgdat | |
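Most of the callers above share one pattern: read the page's node id once with page_to_nid(), then test it against a nodemask or compare it with another object's node (queue_pages_required and node_match are typical examples). The following is a minimal sketch of that pattern; fake_page, fake_nodemask and page_on_allowed_node are illustrative stand-ins, not kernel APIs.

```c
#include <stdbool.h>
#include <stdio.h>

/* Example values only, as in the previous sketch. */
#define NODES_SHIFT   6
#define NODES_PGSHIFT (64 - NODES_SHIFT)
#define NODES_MASK    ((1UL << NODES_SHIFT) - 1)

struct fake_page { unsigned long flags; };          /* stand-in for struct page */

static int fake_page_to_nid(const struct fake_page *page)
{
    return (page->flags >> NODES_PGSHIFT) & NODES_MASK;
}

/* Illustrative stand-in for the kernel's nodemask_t: one bit per node. */
struct fake_nodemask { unsigned long bits; };

static bool fake_node_isset(int nid, const struct fake_nodemask *mask)
{
    return (mask->bits >> nid) & 1UL;
}

/* Mirrors the shape of callers such as queue_pages_required(): fetch the
 * page's node id once, then test membership in a nodemask. */
static bool page_on_allowed_node(const struct fake_page *page,
                                 const struct fake_nodemask *allowed)
{
    return fake_node_isset(fake_page_to_nid(page), allowed);
}

int main(void)
{
    struct fake_page page = { .flags = 2UL << NODES_PGSHIFT };      /* node 2 */
    struct fake_nodemask allowed = { .bits = (1UL << 0) | (1UL << 2) };

    printf("allowed: %d\n", page_on_allowed_node(&page, &allowed)); /* prints: allowed: 1 */
    return 0;
}
```

Reading the node id once and reusing it mirrors the kernel callers, which avoid re-deriving it from page->flags on every test.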