Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: lib/percpu_counter.c  Create Date: 2022-07-28 07:10:56
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name: This function is both preempt and irq safe. The former is due to explicit preemption disable. The latter is guaranteed by the fact that the slow path is explicitly protected by an irq-safe spinlock, whereas the fast path uses this_cpu_add(), which is irq-safe by definition. Hence there is no need to muck with irq state before calling this one.

Proto: void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)

Type: void

Parameter:

Type                     Name
struct percpu_counter *  fbc
s64                      amount
s32                      batch
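
Because the function handles both preemption and irq safety internally, callers can use it without disabling interrupts themselves. A minimal kernel-side usage sketch follows; the counter name nr_events, the example_setup() function, and the batch value 64 are illustrative assumptions, not taken from this source.

#include <linux/gfp.h>
#include <linux/percpu_counter.h>
#include <linux/printk.h>

static struct percpu_counter nr_events;		/* illustrative counter */

static int example_setup(void)			/* hypothetical caller */
{
	int err = percpu_counter_init(&nr_events, 0, GFP_KERNEL);

	if (err)
		return err;

	/* Cheap per-CPU update: folded into the shared count only once
	 * this CPU's local delta reaches +/-64. */
	percpu_counter_add_batch(&nr_events, 1, 64);

	pr_info("approx=%lld exact=%lld\n",
		percpu_counter_read(&nr_events),  /* shared count only */
		percpu_counter_sum(&nr_events));  /* adds all per-CPU deltas */

	percpu_counter_destroy(&nr_events);
	return 0;
}

Body (lib/percpu_counter.c, lines 86-97):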
86  preempt_disable()
87  count = __this_cpu_read(*fbc->counters) + amount
88  If count >= batch || count <= -batch Then
90      raw_spin_lock_irqsave(&fbc->lock, flags)
91      fbc->count += count
92      __this_cpu_sub(*fbc->counters, count - amount)
93      raw_spin_unlock_irqrestore(&fbc->lock, flags)
94  Else
95      this_cpu_add(*fbc->counters, amount)
97  preempt_enable()
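
In the slow path, count equals the old per-CPU value plus amount, so line 91 folds the whole accumulated delta into the shared fbc->count, and line 92 (subtracting count - amount, i.e. exactly the old per-CPU value) resets the local counter to zero. A self-contained, single-threaded model of this batching logic, with illustrative names, is sketched below; the kernel's spinlock and per-CPU machinery are deliberately omitted.

#include <stdio.h>

/* Model: "local" stands in for this CPU's slot in *fbc->counters,
 * "global" for fbc->count. */
struct model_counter {
	long global;
	long local;
};

static void model_add_batch(struct model_counter *c, long amount, long batch)
{
	long count = c->local + amount;

	if (count >= batch || count <= -batch) {
		c->global += count;		/* fold into shared count */
		c->local -= count - amount;	/* zero the local delta */
	} else {
		c->local += amount;		/* fast path: stay local */
	}
}

int main(void)
{
	struct model_counter c = { 0, 0 };
	int i;

	for (i = 0; i < 40; i++)
		model_add_batch(&c, 1, 32);

	/* Prints global=32 local=8 exact=40: a read of the shared count
	 * alone may lag the exact value by up to batch per CPU. */
	printf("global=%ld local=%ld exact=%ld\n",
	       c.global, c.local, c.global + c.local);
	return 0;
}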
Caller
Name                     Description
__fprop_inc_percpu       Event of type pl happened
__fprop_inc_percpu_max   Like __fprop_inc_percpu() except that the event is counted only if the given type has fraction smaller than @max_frac/FPROP_FRAC_BASE