Function report |
Source Code: mm/huge_memory.c |
Create Date: 2022-07-28 16:01:52 |
Last Modify: 2020-03-12 14:18:49 | Copyright©Brick |
Name: follow_trans_huge_pmd
Proto: struct page *follow_trans_huge_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd, unsigned int flags)
Return Type: struct page *
Parameter:
Type | Name | Description |
---|---|---|
struct vm_area_struct * | vma | VMA that contains addr |
unsigned long | addr | virtual address being looked up |
pmd_t * | pmd | PMD entry mapping the huge page |
unsigned int | flags | FOLL_* flags from the GUP caller |
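The assertion at source line 1474 (below) means the caller must already hold the pmd spinlock when invoking this function. As a reading aid, here is a minimal, hypothetical call pattern in the spirit of the caller listed at the end of this report (follow_pmd_mask); the wrapper name thp_lookup_example and its exact shape are illustrative only, not kernel code:

```c
/*
 * Hypothetical call pattern, not verbatim kernel code: take the pmd
 * lock (satisfying the assert at line 1474), confirm the pmd still
 * maps a transparent huge page, then look up the page.
 */
static struct page *thp_lookup_example(struct vm_area_struct *vma,
				       unsigned long addr, pmd_t *pmd,
				       unsigned int flags)
{
	struct page *page = NULL;
	spinlock_t *ptl;

	ptl = pmd_lock(vma->vm_mm, pmd);	/* caller must hold the pmd lock */
	if (pmd_trans_huge(*pmd))		/* still a huge mapping? */
		page = follow_trans_huge_pmd(vma, addr, pmd, flags);
	spin_unlock(ptl);
	return page;
}
```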
1474 | assert_spin_locked(pmd_lockptr(mm, pmd)) |
1476 | If flags & FOLL_WRITE (check pte is writable) && Not can_follow_write_pmd(*pmd, flags) (FOLL_FORCE can write to even unwritable pmd's, but only after we've gone through a COW cycle and they are dirty) Then Go to out |
1480 | If flags & FOLL_DUMP (give error on hole if it would be zero) && is_huge_zero_pmd( * pmd) Then Return ERR_PTR( - EFAULT) |
1484 | If flags & FOLL_NUMA (force NUMA hinting page fault) && pmd_protnone( * pmd) Then Go to out |
1487 | page = pmd_page( * pmd) |
1488 | VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page) |
1491 | If flags & FOLL_MLOCK (lock present pages) && vma->vm_flags & VM_LOCKED Then |
1513 | If PageAnon(page) && compound_mapcount(page) != 1 Then Go to skip_mlock |
1517 | If Not trylock_page(page) (returns true if the page was successfully locked) Then Go to skip_mlock |
1519 | lru_add_drain() |
1524 | skip_mlock : |
1525 | page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT (step from the head page to the subpage that maps addr) |
1526 | VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page) |
1527 | If flags & FOLL_GET (do get_page on page) Then get_page(page) |
1530 | out : |
1531 | Return page |
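Putting the annotated lines together, the sketch below reconstructs the control flow described above. It is a reading aid only: source lines this report does not annotate are elided, so consult mm/huge_memory.c in the annotated kernel version for the authoritative code.

```c
/*
 * Sketch of the flow described by the report above; un-annotated
 * source lines are omitted. Not a drop-in copy of mm/huge_memory.c.
 */
struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
				   unsigned long addr, pmd_t *pmd,
				   unsigned int flags)
{
	struct mm_struct *mm = vma->vm_mm;
	struct page *page = NULL;

	assert_spin_locked(pmd_lockptr(mm, pmd));		/* 1474 */

	/* FOLL_FORCE may write an unwritable pmd only after a COW cycle */
	if (flags & FOLL_WRITE && !can_follow_write_pmd(*pmd, flags))
		goto out;					/* 1476 */

	/* refuse to dump the huge zero page */
	if ((flags & FOLL_DUMP) && is_huge_zero_pmd(*pmd))
		return ERR_PTR(-EFAULT);			/* 1480 */

	/* leave NUMA-hinting (protnone) pmds to the fault path */
	if ((flags & FOLL_NUMA) && pmd_protnone(*pmd))
		goto out;					/* 1484 */

	page = pmd_page(*pmd);					/* 1487 */
	VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);

	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {	/* 1491 */
		if (PageAnon(page) && compound_mapcount(page) != 1)
			goto skip_mlock;			/* 1513 */
		if (!trylock_page(page))
			goto skip_mlock;			/* 1517 */
		lru_add_drain();				/* 1519 */
		/* mlock and unlock of the page: lines not annotated above */
	}
skip_mlock:							/* 1524 */
	/* step from the head page to the subpage that maps addr */
	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;		/* 1525 */
	VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);
	if (flags & FOLL_GET)
		get_page(page);					/* 1527 */
out:
	return page;						/* 1531 */
}
```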
Caller:
Name | Describe |
---|---|
follow_pmd_mask | |