Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/pagewalk.c Create Date: 2022-07-28 14:55:02
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: walk_page_mapping - walk all memory areas mapped into a struct address_space

Proto: int walk_page_mapping(struct address_space *mapping, unsigned long first_index, unsigned long nr, const struct mm_walk_ops *ops, void *private)

Type: int

Parameter:

Type                          Name
struct address_space *        mapping
unsigned long                 first_index
unsigned long                 nr
const struct mm_walk_ops *    ops
void *                        private
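
A caller supplies a set of struct mm_walk_ops callbacks and must hold mapping->i_mmap_rwsem around the call. The sketch below is illustrative only, assuming the v5.5 pagewalk API (walk_page_mapping(), struct mm_walk_ops, i_mmap_lock_read()/i_mmap_unlock_read()); the count_pte_entry() callback and the count_mapped_ptes() helper are hypothetical names, not part of the kernel.

/*
 * Hypothetical example (not from the kernel source): count the present
 * ptes that map file pages [first_index, first_index + nr) of @mapping.
 */
#include <linux/fs.h>
#include <linux/mm.h>
#include <linux/pagewalk.h>

/* Called for each pte in the walked range. */
static int count_pte_entry(pte_t *pte, unsigned long addr,
			   unsigned long next, struct mm_walk *walk)
{
	unsigned long *count = walk->private;

	if (pte_present(*pte))
		(*count)++;
	return 0;		/* keep walking */
}

static const struct mm_walk_ops count_walk_ops = {
	.pte_entry = count_pte_entry,
};

static unsigned long count_mapped_ptes(struct address_space *mapping,
				       pgoff_t first_index, pgoff_t nr)
{
	unsigned long count = 0;

	/* walk_page_mapping() asserts that i_mmap_rwsem is held. */
	i_mmap_lock_read(mapping);
	walk_page_mapping(mapping, first_index, nr, &count_walk_ops, &count);
	i_mmap_unlock_read(mapping);

	return count;
}
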
430  struct mm_walk walk = {.ops = ops, .private = private, }
437  err = 0
439  lockdep_assert_held(&mapping->i_mmap_rwsem)
440  For each vma in the interval tree mapping->i_mmap overlapping [first_index, first_index + nr - 1] (vma_interval_tree_foreach):
443  vba = vma->vm_pgoff (the vma's offset within vm_file, in PAGE_SIZE units)
444  vea = vba + vma_pages(vma)
445  cba = first_index
446  cba = max(cba, vba) (clip the start offset to the vma)
447  cea = first_index + nr
448  cea = min(cea, vea) (clip the end offset to the vma)
450  start_addr = ((cba - vba) << PAGE_SHIFT) + vma->vm_start
451  end_addr = ((cea - vba) << PAGE_SHIFT) + vma->vm_start
452  If start_addr >= end_addr Then Continue
455  walk.vma = vma
456  walk.mm = vma->vm_mm (the address space the vma belongs to)
458  err = walk_page_test(vma->vm_start, vma->vm_end, &walk) - decides whether to really walk over the current vma (return 0), skip it (return 1), or abort the walk (negative error)
459  If err > 0 Then
460  err = 0
461  Break
462  Else if err < 0 Then Break
465  err = __walk_page_range(start_addr, end_addr, &walk)
466  If err Then Break
470  Return err
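
Source lines 443-451 clip the requested page-offset range to the file range covered by each vma and convert the clipped offsets into virtual addresses for __walk_page_range(). Below is a minimal user-space sketch of that arithmetic; the page size, vma geometry and requested range are illustrative assumptions, not taken from the kernel.

/*
 * User-space sketch of the clipping done at source lines 443-451.
 */
#include <stdio.h>

#define PAGE_SHIFT 12UL			/* assume 4 KiB pages */

int main(void)
{
	/* A vma mapping file pages [200, 300) starting at vm_start. */
	unsigned long vm_pgoff = 200, nr_vma_pages = 100;
	unsigned long vm_start = 0x700000000000UL;

	/* Requested file-page range [first_index, first_index + nr). */
	unsigned long first_index = 150, nr = 100;

	unsigned long vba = vm_pgoff;			/* vma begin, in pages */
	unsigned long vea = vba + nr_vma_pages;		/* vma end, in pages   */
	unsigned long cba = first_index > vba ? first_index : vba;
	unsigned long cea = first_index + nr < vea ? first_index + nr : vea;

	unsigned long start_addr = ((cba - vba) << PAGE_SHIFT) + vm_start;
	unsigned long end_addr = ((cea - vba) << PAGE_SHIFT) + vm_start;

	if (start_addr >= end_addr)
		printf("request does not overlap this vma\n");
	else
		printf("walk virtual range [%#lx, %#lx)\n",
		       start_addr, end_addr);
	return 0;
}

With these numbers the request [150, 250) is clipped to the vma's file pages [200, 250), i.e. the first 50 pages of the vma, so the walk covers [0x700000000000, 0x700000032000).
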
Caller
Name                                  Description
wp_shared_mapping_range               Write-protect all ptes in an address space range; @mapping is the address_space to write-protect, @first_index the first page offset in the range, @nr the number of incremental page offsets to cover
clean_record_shared_mapping_range     Clean and record all ptes in an address space range; @mapping is the address_space to clean, @first_index the first page offset in the range, @nr the number of incremental page offsets to cover, @bitmap_pgoff the page offset of the first bit in the dirty bitmap
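
Both callers come from mm/mapping_dirty_helpers.c and use walk_page_mapping() to implement dirty tracking over a shared mapping: wp_shared_mapping_range() arms tracking by write-protecting the ptes of a range, and clean_record_shared_mapping_range() later cleans them again while recording which pages were written. The sketch below is a hedged illustration of that pattern, assuming the v5.5 prototypes declared in include/linux/mm.h (built when CONFIG_MAPPING_DIRTY_HELPERS is enabled); the track_dirty() helper and the exact start/end semantics shown in comments are assumptions for illustration.

#include <linux/bitmap.h>
#include <linux/fs.h>
#include <linux/mm.h>

/*
 * Hypothetical helper: one round of dirty tracking over file pages
 * [first_index, first_index + nr) of @mapping. @bitmap must hold at
 * least @nr bits.
 */
static void track_dirty(struct address_space *mapping, pgoff_t first_index,
			pgoff_t nr, unsigned long *bitmap)
{
	pgoff_t start = 0, end = 0;	/* no bits recorded yet */
	unsigned long num_dirty;

	/* Arm tracking: write-protect all ptes in the range. */
	wp_shared_mapping_range(mapping, first_index, nr);

	/* ... writes to the range now dirty the ptes again via write faults ... */

	/*
	 * Harvest: clean the ptes and record written pages in @bitmap,
	 * with bit 0 corresponding to page offset @first_index; on return
	 * start/end bound the set bits in @bitmap.
	 */
	bitmap_zero(bitmap, nr);
	num_dirty = clean_record_shared_mapping_range(mapping, first_index,
						      nr, first_index,
						      bitmap, &start, &end);
	pr_info("%lu dirty ptes, set bits bounded by [%lu, %lu)\n",
		num_dirty, start, end);
}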