Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/memory.c    Create Date: 2022-07-28 14:41:06
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: insert_pfn

Proto: static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr, pfn_t pfn, pgprot_t prot, bool mkwrite)

Type: vm_fault_t

Parameter:

Type                       Name
struct vm_area_struct *    vma
unsigned long              addr
pfn_t                      pfn
pgprot_t                   prot
bool                       mkwrite
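
insert_pfn() is static to mm/memory.c and is only reached through the wrappers listed under "Caller" at the end of this report; those wrappers turn a raw page-frame number into the pfn_t argument and choose mkwrite. A rough sketch of that step, modelled on the shape of vmf_insert_pfn_prot() in the same file (the example_* name and the exact PFN_DEV flag here should be read as illustrative assumptions, not a quotation):

/* Sketch (inside mm/memory.c): a caller wraps the raw pfn and picks mkwrite. */
static vm_fault_t example_insert_pfn_wrapper(struct vm_area_struct *vma,
					     unsigned long addr, unsigned long pfn,
					     pgprot_t pgprot)
{
	/* PFN_DEV marks a page frame that may have no struct page behind it;
	 * mkwrite == false because this services an ordinary first-touch fault. */
	return insert_pfn(vma, addr, __pfn_to_pfn_t(pfn, PFN_DEV),
			  pgprot, false);
}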
1602  mm = vma->vm_mm (the address space we belong to)
1606  pte = get_locked_pte(mm, addr, &ptl)
1607  If Not pte Then Return VM_FAULT_OOM
1609  If Not pte_none(*pte) Then
1610  If mkwrite Then
1621  If pte_pfn(*pte) != pfn_t_to_pfn(pfn) Then
1623  Go to out_unlock
1625  entry = pte_mkyoung(*pte)
1630  Go to out_unlock
1634  If pfn_t_devmap(pfn) Then entry = pte_mkdevmap(pfn_t_pte(pfn, prot))
1636  Else entry = pte_mkspecial(pfn_t_pte(pfn, prot))
1639  If mkwrite Then
1640  entry = pte_mkyoung(entry)
1641  entry = maybe_mkwrite(pte_mkdirty(entry), vma) (do pte_mkwrite, but only if the vma says VM_WRITE; this is done when servicing faults for write access)
1644  set_pte_at(mm, addr, pte, entry)
1645  update_mmu_cache(vma, addr, pte) (the x86 doesn't have any external MMU info: the kernel page tables contain all the necessary information)
1647  out_unlock:
1648  pte_unmap_unlock(pte, ptl)
1649  Return VM_FAULT_NOPAGE
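
Read together, the numbered notes above correspond to roughly the following C, a sketch reconstructed from this report against the v5.5 mm/memory.c; the source lines the report skips (the comment block around 1611-1620 and the access-flag refresh around 1626-1629) are summarized in comments rather than quoted:

static vm_fault_t insert_pfn(struct vm_area_struct *vma, unsigned long addr,
			pfn_t pfn, pgprot_t prot, bool mkwrite)
{
	struct mm_struct *mm = vma->vm_mm;	/* 1602: the address space we belong to */
	pte_t *pte, entry;
	spinlock_t *ptl;

	pte = get_locked_pte(mm, addr, &ptl);	/* 1606: find/allocate the PTE, take its lock */
	if (!pte)
		return VM_FAULT_OOM;		/* 1607: page-table allocation failed */
	if (!pte_none(*pte)) {			/* 1609: something is already mapped here */
		if (mkwrite) {			/* 1610 */
			/* 1611-1620 (skipped in the report): a mismatched PFN means
			 * we are racing with block allocation / mapping invalidation,
			 * so the update is simply skipped. */
			if (pte_pfn(*pte) != pfn_t_to_pfn(pfn))	/* 1621 */
				goto out_unlock;		/* 1623 */
			entry = pte_mkyoung(*pte);		/* 1625 */
			/* 1626-1629 (skipped): refresh dirty/write access flags on
			 * the existing PTE and update the MMU cache if they changed. */
		}
		goto out_unlock;		/* 1630: existing mapping, nothing more to do */
	}

	/* No mapping yet: build a fresh devmap or special PTE. */
	if (pfn_t_devmap(pfn))					/* 1634 */
		entry = pte_mkdevmap(pfn_t_pte(pfn, prot));
	else							/* 1636 */
		entry = pte_mkspecial(pfn_t_pte(pfn, prot));

	if (mkwrite) {						/* 1639 */
		entry = pte_mkyoung(entry);			/* 1640 */
		entry = maybe_mkwrite(pte_mkdirty(entry), vma);	/* 1641 */
	}

	set_pte_at(mm, addr, pte, entry);	/* 1644: install the new PTE */
	update_mmu_cache(vma, addr, pte);	/* 1645: no-op on x86 */

out_unlock:					/* 1647 */
	pte_unmap_unlock(pte, ptl);		/* 1648: drop the PTE lock */
	return VM_FAULT_NOPAGE;			/* 1649: PTE is in place, no struct page to return */
}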
Caller
Name                    Description
vmf_insert_pfn_prot     vmf_insert_pfn_prot - insert single pfn into user vma with specified pgprot. @vma: user vma to map to; @addr: target user address of this page; @pfn: source kernel pfn; @pgprot: pgprot flags for the inserted page. This is exactly like vmf_insert_pfn(), except …
__vm_insert_mixed
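
To show how the first of these callers is typically used, here is a hypothetical driver fault handler built around vmf_insert_pfn_prot(); the names my_dev_fault, my_dev_vm_ops and my_dev_phys_base are invented for the example, and the VMA is assumed to have been marked VM_PFNMAP in the driver's mmap handler:

#include <linux/mm.h>

/* Hypothetical device base address, filled in at probe time. */
static phys_addr_t my_dev_phys_base;

static vm_fault_t my_dev_fault(struct vm_fault *vmf)
{
	struct vm_area_struct *vma = vmf->vma;
	unsigned long pfn = my_dev_phys_base >> PAGE_SHIFT;

	/* Map the device's first page, uncached, at the faulting address.
	 * On success insert_pfn() installs the PTE and this returns
	 * VM_FAULT_NOPAGE, telling the fault core there is no struct page. */
	return vmf_insert_pfn_prot(vma, vmf->address, pfn,
				   pgprot_noncached(vma->vm_page_prot));
}

static const struct vm_operations_struct my_dev_vm_ops = {
	.fault = my_dev_fault,
};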