Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/vmscan.c    Create Date: 2022-07-28 14:18:51
Last Modify: 2022-05-23 13:41:30    Copyright © Brick

Name: get_scan_count - Determine how aggressively the anon and file LRU lists should be scanned

Proto:static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc, unsigned long *nr)

Type:void

Parameter:

Type                   Name
struct lruvec *        lruvec
struct scan_control *  sc
unsigned long *        nr
2247  memcg = lruvec_memcg(lruvec)
2248  swappiness = mem_cgroup_swappiness(memcg)
2249  reclaim_stat = &lruvec->reclaim_stat
2251  denominator = 0
2252  pgdat = lruvec_pgdat(lruvec)
2260  If Not sc->may_swap || mem_cgroup_get_nr_swap_pages(memcg) <= 0 Then  (no swap space: do not bother scanning anon pages)
2261  scan_balance = SCAN_FILE
2262  Go to out
2272  If cgroup_reclaim(sc) && Not swappiness Then
2273  scan_balance = SCAN_FILE
2274  Go to out
2282  If Not sc->priority && swappiness Then  (system close to OOM: apply no balancing cleverness, scan anon and file equally)
2283  scan_balance = SCAN_EQUAL
2284  Go to out
2290  If sc->file_is_tiny Then  (the file pages on the current node are dangerously low: force-scan anon)
2291  scan_balance = SCAN_ANON
2292  Go to out
2299  If Not inactive_list_is_low(lruvec, true, sc, false) && lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, sc->reclaim_idx) >> sc->priority Then  (there is easily reclaimable cold file cache on the current node: scan file only)
2300  scan_balance = SCAN_FILE
2301  Go to out
2304  scan_balance = SCAN_FRACT
2310  anon_prio = swappiness
2311  file_prio = 200 - anon_prio
2325  anon = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) + lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, MAX_NR_ZONES)
2327  file = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) + lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES)
2330  spin_lock_irq( & pgdat->lru_lock )
2331  If unlikely(recent_scanned[0] > anon / 4) Then  (decay the stats; the higher the rotated/scanned ratio, the more valuable that cache is; anon LRU stats live in [0], file LRU stats in [1])
2332  recent_scanned[0] /= 2
2333  recent_rotated[0] /= 2
2336  If unlikely(recent_scanned[1] > file / 4) Then
2337  recent_scanned[1] /= 2
2338  recent_rotated[1] /= 2
2346  ap = anon_prio * (recent_scanned[0] + 1)
2347  ap /= recent_rotated[0] + 1
2349  fp = file_prio * (recent_scanned[1] + 1)
2350  fp /= recent_rotated[1] + 1
2351  spin_unlock_irq( & pgdat->lru_lock )
2353  fraction[0] = ap
2354  fraction[1] = fp
2355  denominator = ap + fp + 1
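The SCAN_FRACT arithmetic above (lines 2310-2355) can be sketched as a stand-alone C helper. `balance_fractions` and `struct recent_stat` are illustrative names, not kernel API; the +1 terms and the 0..200 priority scale follow the pseudocode.

```c
#include <assert.h>

/*
 * Hypothetical stand-alone sketch of the SCAN_FRACT balancing step:
 * pressure on each LRU is proportional to swappiness and inversely
 * proportional to how often recently scanned pages were rotated back
 * (i.e. found to still be in use). Index [0] = anon, [1] = file.
 */
struct recent_stat {
	unsigned long scanned[2];
	unsigned long rotated[2];
};

static void balance_fractions(int swappiness, const struct recent_stat *rs,
			      unsigned long long fraction[2],
			      unsigned long long *denominator)
{
	unsigned long anon_prio = swappiness;	/* 0..200 scale */
	unsigned long file_prio = 200 - anon_prio;
	unsigned long long ap, fp;

	/* the +1 terms guard against division by zero on fresh stats */
	ap = anon_prio * (unsigned long long)(rs->scanned[0] + 1);
	ap /= rs->rotated[0] + 1;

	fp = file_prio * (unsigned long long)(rs->scanned[1] + 1);
	fp /= rs->rotated[1] + 1;

	fraction[0] = ap;
	fraction[1] = fp;
	*denominator = ap + fp + 1;
}
```

With swappiness 100 both priorities are equal, so the split is driven entirely by the rotated/scanned ratios: a file list whose pages are never rotated gets a far larger fraction than an anon list where almost every scanned page was still in use.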
2356  out :
2357  for_each_evictable_lru(lru)
2358  file = is_file_lru(lru)
2363  lruvec_size = lruvec_lru_size(lruvec, lru, sc->reclaim_idx)
2364  protection = mem_cgroup_protection(memcg, sc->memcg_low_reclaim)  (cgroups are not reclaimed below their configured memory.low unless we threaten to OOM; if any cgroups are skipped due to memory.low and nothing was reclaimed, go back for memory.low)
2367  If protection Then
2397  cgroup_size = mem_cgroup_size(memcg)
2400  cgroup_size = max(cgroup_size, protection)  (avoid TOCTOU with the earlier protection check)
2402  scan = lruvec_size - lruvec_size * protection / cgroup_size
2410  scan = max(scan, SWAP_CLUSTER_MAX)  (minimally target SWAP_CLUSTER_MAX pages to keep reclaim moving forwards)
2411  Else
2412  scan = lruvec_size
2415  scan >>= sc->priority
2421  If Not scan && Not mem_cgroup_online(memcg) Then scan = min(lruvec_size, SWAP_CLUSTER_MAX)  (cgroup already deleted: scrape out the remaining cache)
2425  Case scan_balance == SCAN_EQUAL
2427  Break
2428  Case scan_balance == SCAN_FRACT
2436  scan = If mem_cgroup_online(memcg) Then div64_u64(scan * fraction[file], denominator) Else DIV64_U64_ROUND_UP(scan * fraction[file], denominator)  (round up for offlined memcgs so the last page is not missed to a round-off error)
2440  Break
2441  Case scan_balance == SCAN_FILE
2442  Case scan_balance == SCAN_ANON
2444  If (scan_balance == SCAN_FILE) != file Then  (scan one type exclusively)
2445  lruvec_size = 0
2446  scan = 0
2448  Break
2449  Default
2451  BUG()
2454  nr[lru] = scan
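The protection and priority scaling inside the per-LRU loop after `out:` (lines 2356-2454) reduces to arithmetic like the following stand-alone C sketch. `scan_target` is a hypothetical helper, `online` stands in for `mem_cgroup_online()`; SWAP_CLUSTER_MAX is 32 as in the kernel.

```c
#include <assert.h>

#define SWAP_CLUSTER_MAX 32UL

/* stand-ins for the kernel's max()/min() macros */
static unsigned long max_ul(unsigned long a, unsigned long b) { return a > b ? a : b; }
static unsigned long min_ul(unsigned long a, unsigned long b) { return a < b ? a : b; }

/*
 * Sketch of one list's scan target: leave the memory.low/min-protected
 * share of the list unscanned, scale by reclaim priority, and still
 * scrape remaining cache out of deleted (offline) cgroups.
 */
static unsigned long scan_target(unsigned long lruvec_size,
				 unsigned long protection,
				 unsigned long cgroup_size,
				 int priority, int online)
{
	unsigned long scan;

	if (protection) {
		/* avoid TOCTOU: usage may have dropped below protection */
		cgroup_size = max_ul(cgroup_size, protection);
		/* scan only the unprotected proportion of the list */
		scan = lruvec_size - lruvec_size * protection / cgroup_size;
		/* keep reclaim moving forwards */
		scan = max_ul(scan, SWAP_CLUSTER_MAX);
	} else {
		scan = lruvec_size;
	}

	scan >>= priority;

	/* deleted cgroup: make sure its remaining cache is reclaimed */
	if (!scan && !online)
		scan = min_ul(lruvec_size, SWAP_CLUSTER_MAX);

	return scan;
}
```

For example, a cgroup using 1000 pages with 500 pages of protection has half of each list exempted before the priority shift is applied.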
Caller
Name            Describe
shrink_lruvec