Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/vmscan.c

Name: balance_pgdat

Description: For kswapd, balance_pgdat() will reclaim pages across a node from zones that are eligible for use by the caller until at least one zone is balanced. It returns the order kswapd finished reclaiming at.

Proto: static int balance_pgdat(pg_data_t *pgdat, int order, int classzone_idx)

Type: int

Parameter:

Type          Name
pg_data_t *   pgdat
int           order
int           classzone_idx
3577  unsigned long zone_boosts[MAX_NR_ZONES] = {0, }
3580  struct scan_control sc = { .gfp_mask = GFP_KERNEL, .order = order, .may_unmap = 1, } — set this context's GFP mask, the allocation order, and allow mapped pages to be reclaimed
3586  set_task_reclaim_state(current, &sc.reclaim_state) — point the current task's reclaim_state at sc.reclaim_state, which records the slab reclaimed so far during this call
3587  psi_memstall_enter(&pflags) — mark the beginning of a memory stall section: the calling task is now stalled due to a lack of memory, such as waiting for a refault or performing reclaim
3588  __fs_reclaim_acquire()
3590  count_vm_event(PAGEOUTRUN) — account this pageout run in the VM event counters
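Taken together, lines 3577-3590 form the prologue of the function. A minimal sketch of how this reads in the v5.5 source, reconstructed from the descriptions above (pflags and sc.reclaim_state are local names not spelled out in this report):

    unsigned long pflags;
    unsigned long zone_boosts[MAX_NR_ZONES] = { 0, };
    struct scan_control sc = {
        .gfp_mask  = GFP_KERNEL,
        .order     = order,
        .may_unmap = 1,
    };

    set_task_reclaim_state(current, &sc.reclaim_state);
    psi_memstall_enter(&pflags);
    __fs_reclaim_acquire();
    count_vm_event(PAGEOUTRUN);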
3597  nr_boost_reclaim = 0
3598  For i = 0 up to classzone_idx cycle
3599  zone = pgdat->node_zones + i
3600  If Not managed_zone(zone) Then Continue — managed_zone() returns true only if the zone has pages managed by the buddy allocator; reclaim decisions must use it rather than populated_zone(), since a fully reserved zone can be populated but not managed
3603  nr_boost_reclaim += zone->watermark_boost
3604  zone_boosts[i] = zone->watermark_boost
3606  boosted = nr_boost_reclaim
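Lines 3597-3606 account for any watermark boost that is active on the node; the boost itself is left in place so that parallel allocations near the watermark keep stalling or direct-reclaiming until kswapd is done. A sketch of the loop as described above:

    nr_boost_reclaim = 0;
    for (i = 0; i <= classzone_idx; i++) {
        zone = pgdat->node_zones + i;
        if (!managed_zone(zone))
            continue;

        nr_boost_reclaim += zone->watermark_boost;
        zone_boosts[i] = zone->watermark_boost;
    }
    boosted = nr_boost_reclaim;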
3608  restart:
3609  sc.priority = DEF_PRIORITY — the scan "priority" sets how much of the LRU queues is scanned in one pass (queue_length >> priority); DEF_PRIORITY of 12 means 1/4096th of the queues per aging round
3610  Do
3611  nr_reclaimed = sc.nr_reclaimed — snapshot the number of pages freed so far during this call
3612  bool raise_priority = true
3616  sc.reclaim_idx = classzone_idx — the highest zone to isolate pages for reclaim from
3628  If buffer_heads_over_limit Then
3629  When i >= 0 cycle, from MAX_NR_ZONES - 1 downward — find the highest managed zone and raise sc.reclaim_idx to it so buffer_heads can be stripped from all zones
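Each pass of the priority loop starts from the snapshot above and chooses the highest zone index to reclaim from. A sketch, assuming the buffer_heads branch simply walks down to the highest managed zone (its body is not listed line by line in this report):

    restart:
        sc.priority = DEF_PRIORITY;
        do {
            unsigned long nr_reclaimed = sc.nr_reclaimed;
            bool raise_priority = true;

            sc.reclaim_idx = classzone_idx;

            if (buffer_heads_over_limit) {
                /* reclaim from all zones so buffer_heads can be stripped */
                for (i = MAX_NR_ZONES - 1; i >= 0; i--) {
                    zone = pgdat->node_zones + i;
                    if (!managed_zone(zone))
                        continue;

                    sc.reclaim_idx = i;
                    break;
                }
            }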
3646  balanced = pgdat_balanced(pgdat, sc.order, classzone_idx) — true if there is an eligible zone balanced for the requested order and classzone_idx
3647  If Not balanced && nr_boost_reclaim Then
3648  nr_boost_reclaim = 0
3649  Go to restart
3657  If Not nr_boost_reclaim && balanced Then Go to out
3661  If nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2 Then raise_priority = false — limit the priority of boost reclaim to avoid reclaim writeback
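Lines 3646-3661 decide how boosting and balancing interact: an imbalanced node cancels boost reclaim and restarts at full priority, a balanced node with no boost outstanding exits, and boost reclaim never runs below DEF_PRIORITY - 2 so it does not trigger writeback from reclaim. Roughly:

            balanced = pgdat_balanced(pgdat, sc.order, classzone_idx);
            if (!balanced && nr_boost_reclaim) {
                nr_boost_reclaim = 0;   /* keep the watermark boost for later */
                goto restart;
            }

            if (!nr_boost_reclaim && balanced)
                goto out;               /* nothing left to reclaim */

            if (nr_boost_reclaim && sc.priority == DEF_PRIORITY - 2)
                raise_priority = false; /* cap the priority of boost reclaim */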
3670  sc.may_writepage = !laptop_mode && !nr_boost_reclaim — do not write back pages in laptop mode or during boost reclaim
3671  sc.may_swap = !nr_boost_reclaim — do not swap pages as part of boosted reclaim
3679  age_active_anon(pgdat, & sc)
3685  If sc.priority < DEF_PRIORITY - 2 Then sc.may_writepage = 1 — if we are having trouble reclaiming, start doing writepage even in laptop mode
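Lines 3670-3685 set the per-pass reclaim policy: boosted reclaim should relieve pressure without issuing sub-optimal IO, the anon lists get some background aging, and writepage is forced on once reclaim is clearly struggling. Approximately:

            sc.may_writepage = !laptop_mode && !nr_boost_reclaim;
            sc.may_swap = !nr_boost_reclaim;

            /* background aging of the anon lists before reclaiming */
            age_active_anon(pgdat, &sc);

            /* if reclaim is in trouble, write pages even in laptop mode */
            if (sc.priority < DEF_PRIORITY - 2)
                sc.may_writepage = 1;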
3689  sc.nr_scanned = 0 — reset the count of inactive pages scanned
3690  nr_soft_scanned = 0
3691  nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(pgdat, sc.order, sc.gfp_mask, &nr_soft_scanned) — call memcg soft limit reclaim before shrinking the node
3693  sc.nr_reclaimed += nr_soft_reclaimed
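Lines 3689-3693 run memcg soft limit reclaim before the node itself is shrunk, folding its result into sc.nr_reclaimed. A sketch:

            sc.nr_scanned = 0;
            nr_soft_scanned = 0;
            nr_soft_reclaimed = mem_cgroup_soft_limit_reclaim(pgdat, sc.order,
                                                              sc.gfp_mask,
                                                              &nr_soft_scanned);
            sc.nr_reclaimed += nr_soft_reclaimed;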
3700  If kswapd_shrink_node(pgdat, &sc) Then raise_priority = false — kswapd_shrink_node() shrinks pages at or below the highest currently unbalanced usable zone; it returns true if it scanned at least the requested number of pages, or if the lack of progress was due to pages under writeback, so there is no need to raise the scanning priority
3708  If waitqueue_active(&pgdat->pfmemalloc_wait) && allow_direct_reclaim(pgdat) Then wake_up_all(&pgdat->pfmemalloc_wait) — the low watermark is met, so wake processes throttled on pfmemalloc_wait (waitqueue_active() is a lockless check for waiters)
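Lines 3700-3708 do the actual node shrink and then unthrottle direct reclaimers once the low watermark is met again. Roughly:

            if (kswapd_shrink_node(pgdat, &sc))
                raise_priority = false;

            if (waitqueue_active(&pgdat->pfmemalloc_wait) &&
                allow_direct_reclaim(pgdat))
                wake_up_all(&pgdat->pfmemalloc_wait);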
3713  __fs_reclaim_release()
3714  ret = try_to_freeze()
3715  __fs_reclaim_acquire()
3716  If ret || kthread_should_stop() Then Break — stop if kswapd was frozen or someone called kthread_stop() on it
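Lines 3713-3716 drop fs_reclaim, check whether kswapd should freeze or stop, and re-acquire it. A sketch (ret is a local variable not listed in this report):

            __fs_reclaim_release();
            ret = try_to_freeze();
            __fs_reclaim_acquire();
            if (ret || kthread_should_stop())
                break;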
3723  nr_reclaimed = sc.nr_reclaimed - nr_reclaimed — pages reclaimed during this iteration
3724  nr_boost_reclaim -= min(nr_boost_reclaim, nr_reclaimed)
3731  If nr_boost_reclaim && Not nr_reclaimed Then Break
3734  If raise_priority || Not nr_reclaimed Then sc.priority-- — raise the scanning priority if the scanning rate was too low or no progress was made
3736  When sc.priority >= 1 cycle
3738  If Not sc.nr_reclaimed Then pgdat->kswapd_failures++ — count another run in which nothing was reclaimed
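Lines 3723-3738 close the priority loop: the boost budget is charged with this pass's progress, boost reclaim bails out if it made none (it is not allowed to queue IO), the scan priority is raised (sc.priority decremented, so more is scanned next pass) when the pass asked for it or reclaimed nothing, and a run that reclaimed nothing at all bumps the node's failure counter. Roughly:

            nr_reclaimed = sc.nr_reclaimed - nr_reclaimed;
            nr_boost_reclaim -= min(nr_boost_reclaim, nr_reclaimed);

            /* boost reclaim made no progress and must not queue IO: stop */
            if (nr_boost_reclaim && !nr_reclaimed)
                break;

            if (raise_priority || !nr_reclaimed)
                sc.priority--;
        } while (sc.priority >= 1);

        if (!sc.nr_reclaimed)
            pgdat->kswapd_failures++;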
3741  out:
3743  If boosted Then
3746  For i = 0 up to classzone_idx cycle
3747  If Not zone_boosts[i] Then Continue
3751  zone = pgdat->node_zones + i
3761  wakeup_kcompactd(pgdat, pageblock_order, classzone_idx) — there is now likely free space, so wake kcompactd to defragment pageblocks
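Lines 3741-3761 undo the watermark boost accounted earlier and then kick compaction. The per-zone decrement happens under the zone lock; the report does not list those lines, so this is a reconstruction of how the block reads in the v5.5 source:

    out:
        if (boosted) {
            unsigned long flags;

            for (i = 0; i <= classzone_idx; i++) {
                if (!zone_boosts[i])
                    continue;

                /* increments are done under the zone lock */
                zone = pgdat->node_zones + i;
                spin_lock_irqsave(&zone->lock, flags);
                zone->watermark_boost -= min(zone->watermark_boost,
                                             zone_boosts[i]);
                spin_unlock_irqrestore(&zone->lock, flags);
            }

            /* there is now likely free space: let kcompactd defragment */
            wakeup_kcompactd(pgdat, pageblock_order, classzone_idx);
        }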
3764  snapshot_refaults(NULL, pgdat)
3765  __fs_reclaim_release()
3766  psi_memstall_leave(&pflags) — mark the end of the memory stall section: the calling task is no longer stalled due to lack of memory
3767  set_task_reclaim_state(current, NULL)
3775  Return sc.order — the allocation order kswapd finished reclaiming at; prepare_kswapd_sleep() takes it into account
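Lines 3764-3775 tear down what the prologue set up, in reverse order, and return the order kswapd stopped reclaiming at. A sketch of the epilogue:

    snapshot_refaults(NULL, pgdat);
    __fs_reclaim_release();
    psi_memstall_leave(&pflags);
    set_task_reclaim_state(current, NULL);

    /*
     * If another caller entered the allocator slow path while kswapd
     * was awake, order will remain at the higher level.
     */
    return sc.order;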
Caller
Name      Describe
kswapd    The background pageout daemon, started as a kernel thread from the init process. It basically trickles out pages so that we have some free memory available even if there is no other activity that frees anything up.