Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/vmscan.c  Create Date: 2022-07-28 14:17:23
Last Modify: 2022-05-23 13:41:30

Name: shrink_slab_memcg

Proto: static unsigned long shrink_slab_memcg(gfp_t gfp_mask, int nid, struct mem_cgroup *memcg, int priority)

Type: unsigned long

Parameter:

Type                 Name
gfp_t                gfp_mask
int                  nid
struct mem_cgroup *  memcg
int                  priority
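These parameters feed directly into the struct shrink_control the function builds for each shrinker it visits. A minimal userspace sketch of that setup (the types here are illustrative stand-ins, not the kernel definitions):

```c
#include <assert.h>
#include <stddef.h>

/* Userspace stand-ins for the kernel types (illustrative only). */
typedef unsigned int gfp_t;
struct mem_cgroup { int id; };

/* Mirrors the three shrink_control fields this function fills in. */
struct shrink_control {
    gfp_t gfp_mask;            /* allocation context */
    int nid;                   /* current node being shrunk (NUMA aware shrinkers) */
    struct mem_cgroup *memcg;  /* current memcg being shrunk (memcg aware shrinkers) */
};

/* Build the control block the way shrink_slab_memcg() does per shrinker. */
static struct shrink_control make_sc(gfp_t gfp_mask, int nid,
                                     struct mem_cgroup *memcg)
{
    struct shrink_control sc = {
        .gfp_mask = gfp_mask,
        .nid = nid,
        .memcg = memcg,
    };
    return sc;
}
```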
562  freed = 0
565  If Not mem_cgroup_online(memcg) Then Return 0
568  If Not down_read_trylock(&shrinker_rwsem) Then Return 0 (trylock for reading -- returns 1 if successful, 0 if contention)
571  map = rcu_dereference_protected(shrinker_map, true) -- fetch the RCU-protected shrinker map; safe without rcu_read_lock() because holding shrinker_rwsem prevents updates
573  If unlikely(!map) Then Go to unlock
577  For each set bit i in map->map: struct shrink_control sc = {.gfp_mask = gfp_mask, .nid = nid (current node being shrunk, for NUMA aware shrinkers), .memcg = memcg (current memcg being shrunk, for memcg aware shrinkers)}
584  shrinker = idr_find(&shrinker_idr, i) -- return the shrinker registered under the given ID
585  If unlikely(!shrinker || shrinker == SHRINKER_REGISTERING) Then -- SHRINKER_REGISTERING marks a shrinker whose registration has not completed: subsystems may populate their shrinker-related LRU lists before register_shrinker_prepared() is called, so no restriction is imposed on their internal registration order
586  If Not shrinker Then clear_bit(i, map->map) -- clear the stale bit; clear_bit is a relaxed atomic operation with no implied memory barriers
588  Continue
592  If Not memcg_kmem_enabled() && Not (shrinker->flags & SHRINKER_NONSLAB) Then Continue -- SHRINKER_NONSLAB only makes sense when the shrinker is also MEMCG_AWARE; non-MEMCG_AWARE shrinkers should not set this flag
596  ret = do_shrink_slab(&sc, shrinker, priority)
597  If ret == SHRINK_EMPTY Then
598  clear_bit(i, map->map) -- the shrinker reported no objects, so clear its bit in the memcg shrinker map
614  smp_mb__after_atomic() -- a new object may have been added after the empty report but before the bit was cleared; this barrier pairs with the one in memcg_set_shrinker_bit()
615  ret = do_shrink_slab(&sc, shrinker, priority) -- invoke the shrinker once more to catch that window
616  If ret == SHRINK_EMPTY Then ret = 0
618  Else memcg_set_shrinker_bit(memcg, nid, i) -- not empty anymore: set the bit again
621  freed += ret
623  If rwsem_is_contended(&shrinker_rwsem) Then -- a heuristic, meant for somebody already holding the rwsem, to see whether a waiter of an incompatible type wants the lock; it is the same regardless of which rwsem implementation is in use
624  freed = freed ?: 1 -- ensure a nonzero return when bailing out early
625  Break
628  unlock:
629  up_read(&shrinker_rwsem) -- release the read lock
630  Return freed
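The core of the walk above (source lines 596-618) is the clear-and-retry protocol on the memcg shrinker bitmap. Below is a single-threaded userspace model of it: the bitmap helpers, the do_shrink_slab() stub, and the object counts are all hypothetical stand-ins for the kernel's atomic versions, and the memory barrier is reduced to a comment.

```c
#include <assert.h>

/* Single-threaded model of the memcg shrinker bitmap walk.
 * All names below are userspace stand-ins, not the kernel symbols. */

#define NR_SHRINKERS 8
#define SHRINK_EMPTY (~0UL - 1)   /* same sentinel value the kernel uses */

static unsigned long map;                    /* one word of per-shrinker bits */
static unsigned long objects[NR_SHRINKERS];  /* objects each stub shrinker can free */

static void set_bit(int i)   { map |=  (1UL << i); }
static void clear_bit(int i) { map &= ~(1UL << i); }
static int  test_bit(int i)  { return (map >> i) & 1; }

/* Stub for do_shrink_slab(): free everything or report SHRINK_EMPTY. */
static unsigned long do_shrink_slab(int i)
{
    unsigned long n = objects[i];
    if (!n)
        return SHRINK_EMPTY;
    objects[i] = 0;
    return n;
}

/* Models source lines 576-621: walk the set bits, clear a bit when the
 * shrinker reports empty, retry once, and re-set the bit if the retry
 * finds objects that raced in. */
static unsigned long shrink_slab_memcg_model(void)
{
    unsigned long freed = 0;

    for (int i = 0; i < NR_SHRINKERS; i++) {
        unsigned long ret;

        if (!test_bit(i))
            continue;

        ret = do_shrink_slab(i);
        if (ret == SHRINK_EMPTY) {
            clear_bit(i);
            /* kernel: smp_mb__after_atomic(), pairing with the barrier
             * in memcg_set_shrinker_bit(); a no-op in this model */
            ret = do_shrink_slab(i);
            if (ret == SHRINK_EMPTY)
                ret = 0;
            else
                set_bit(i);   /* stands in for memcg_set_shrinker_bit() */
        }
        freed += ret;
    }
    return freed;
}
```

The retry exists because an object can be added between the empty report and clear_bit(); the real function closes that race with the paired memory barriers noted in the walkthrough.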
Caller
Name         Description
shrink_slab  Shrink slab caches -- call the shrink functions to age shrinkable caches. @gfp_mask: allocation context; @nid: node whose slab caches to target; @memcg: memory cgroup whose slab caches to target; @priority: the reclaim priority