Function Logic Report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: lib/percpu_counter.c  Create Date: 2022-07-27 08:09:00
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Function name: percpu_counter_add_batch

Description: This function is both preempt and irq safe. The former is due to explicit preemption disable. The latter is guaranteed by the fact that the slow path is explicitly protected by an irq-safe spinlock, whereas the fast path uses this_cpu_add(), which is irq-safe by definition.

Function prototype: void percpu_counter_add_batch(struct percpu_counter *fbc, s64 amount, s32 batch)

Return type: void

Parameters:

Type                     Name
struct percpu_counter *  fbc
s64                      amount
s32                      batch
86  preempt_disable()
87  count = __this_cpu_read(*fbc->counters) + amount
88  if (count >= batch || count <= -batch)
90      raw_spin_lock_irqsave(&fbc->lock, flags)
91      fbc->count += count
92      __this_cpu_sub(*fbc->counters, count - amount)
93      raw_spin_unlock_irqrestore(&fbc->lock, flags)
94  else
95      this_cpu_add(*fbc->counters, amount)
97  preempt_enable()
Callers:

Name                     Description
__fprop_inc_percpu       Event of type pl happened
__fprop_inc_percpu_max   Like __fprop_inc_percpu() except that the event is counted only if the given type has a fraction smaller than @max_frac/FPROP_FRAC_BASE
blkg_rwstat_add          blkg_rwstat_add - add a value to a blkg_rwstat. @rwstat: target blkg_rwstat, @op: REQ_OP and flags, @val: value to add. Add @val to @rwstat. The counters are chosen according to @rw. The caller is responsible for synchronizing calls to this function.