Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/slub.c Create Date: 2022-07-28 15:48:34
Last Modify: 2020-03-12 14:18:49 Copyright © Brick

Name: __slab_free, the slow path for freeing. This may still be called frequently, since objects have a longer lifetime than the cpu slabs in most processing loads, so we still attempt to reduce cache line usage: just take the slab lock and free the item.

Proto:static void __slab_free(struct kmem_cache *s, struct page *page, void *head, void *tail, int cnt, unsigned long addr)

Type:void

Parameter:

Type                  Parameter Name
struct kmem_cache *   s
struct page *         page
void *                head
void *                tail
int                   cnt
unsigned long         addr
2838  struct kmem_cache_node *n = NULL
2839  unsigned long flags (uninitialized_var)
2841  stat(s, FREE_SLOWPATH)
2843  If kmem_cache_debug(s) && !free_debug_processing(s, page, head, tail, cnt, addr) Then Return
2847  Do
2850  If n was set on a previous iteration, drop n->list_lock and reset n = NULL
2852  prior = page->freelist (the first free object)
2853  counters = page->counters
2854  set_freepointer(s, tail, prior)
2855  new.counters = counters
2856  was_frozen = new.frozen
2857  new.inuse -= cnt
2858  If (!new.inuse || !prior) && !was_frozen Then
2860  If kmem_cache_has_cpu_partial(s) && !prior Then
2868  new.frozen = 1 (the slab was on no list and will be partially empty, so defer the list move and freeze it instead)
2870  Else the slab needs to be taken off a list: n = get_node(s, page_to_nid(page)), then speculatively take spin_lock_irqsave(&n->list_lock, flags); if the cmpxchg fails, the lock is dropped without any processing
2886  Repeat while cmpxchg_double_slab(s, page, prior, counters, head, new.counters, "__slab_free") fails
2891  If likely(!n) Then (the list lock was not taken, so no list activity is necessary)
2897  If new.frozen && !was_frozen Then (we just froze the page, so put it onto the per-cpu partial list)
2898  put_cpu_partial(s, page, 1)
2905  If was_frozen Then stat(s, FREE_FROZEN)
2907  Return
2910  If unlikely(!new.inuse && n->nr_partial >= s->min_partial) Then Go to slab_empty
2917  If !kmem_cache_has_cpu_partial(s) && unlikely(!prior) Then (objects are left in the slab and it was not on the partial list before, so add it)
2918  remove_full(s, n, page)
2919  add_partial(n, page, DEACTIVATE_TO_TAIL)
2920  stat(s, FREE_ADD_PARTIAL)
2922  spin_unlock_irqrestore(&n->list_lock, flags)
2923  Return
2925  slab_empty:
2926  If prior Then (the slab is on the partial list)
2930  remove_partial(n, page)
2931  stat(s, FREE_REMOVE_PARTIAL)
2932  Else (the slab must be on the full list)
2934  remove_full(s, n, page)
2937  spin_unlock_irqrestore(&n->list_lock, flags)
2938  stat(s, FREE_SLAB)
2939  discard_slab(s, page)
Caller
Name          Describe
do_slab_free  Fastpath with forced inlining, producing a kfree and kmem_cache_free that can perform fastpath freeing without additional function calls. The fastpath is only possible if we are freeing to the current cpu slab of this processor.