Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: mm/slub.c    Create Date: 2022-07-28 15:48:37
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: do_slab_free

Description: Fastpath with forced inlining to produce a kfree and kmem_cache_free that can perform fastpath freeing without additional function calls. The fastpath is only possible if we are freeing to the current cpu slab of this processor.

Proto:static __always_inline void do_slab_free(struct kmem_cache *s, struct page *page, void *head, void *tail, int cnt, unsigned long addr)

Type:void

Parameter:

Type                   Parameter Name
struct kmem_cache *    s
struct page *          page
void *                 head
void *                 tail
int                    cnt
unsigned long          addr
2961  tail_obj = tail if tail is non-NULL, otherwise head
2964  redo:
2971  do {
2972  tid = this_cpu_read(s->cpu_slab->tid) — read the globally unique transaction id; per-cpu operations carry implied preemption/interrupt protection, so they can be used without worrying about preemption or interrupts
2973  c = raw_cpu_ptr(s->cpu_slab) — get this cpu's kmem_cache_cpu structure
2974  } while (IS_ENABLED(CONFIG_PREEMPT) && unlikely(tid != READ_ONCE(c->tid))) — IS_ENABLED(CONFIG_FOO) evaluates to 1 if CONFIG_FOO is set to 'y' or 'm', 0 otherwise; unlikely() tells the compiler the condition is probably false; retry until tid and the cpu slab pointer are read consistently
2978  barrier() — compiler barrier; the "volatile" is due to gcc bugs
2980  if (likely(page == c->page)) — the object belongs to the slab we are currently allocating from, so the fastpath applies
2981  set_freepointer(s, tail_obj, c->freelist) — link the tail object to the current head of the freelist (the next available object)
2988  note_cmpxchg_failure("slab_free", s, tid) — the cmpxchg that tried to install the new freelist head and advance tid failed (another cpu or context raced with us)
2989  goto redo — retry the whole fastpath
2991  stat(s, FREE_FASTPATH) — count a free to the cpu slab
2992  else __slab_free(...) — slow path handling; this may still be called frequently, since objects have a longer lifetime than the cpu slabs in most processing loads, so we still attempt to reduce cache line usage: just take the slab lock and free the item
Caller:

Name        Describe
slab_free