Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/events/core.c Create Date: 2022-07-28 13:33:30
Last Modify:2022-05-20 07:50:19 Copyright©Brick

Name:perf_event_context_sched_out

Proto:static void perf_event_context_sched_out(struct task_struct *task, int ctxn, struct task_struct *next)

Type:void

Parameter:

Type                   Name
struct task_struct *   task
int                    ctxn
struct task_struct *   next
3204  ctx = task->perf_event_ctxp[ctxn]
3208  do_switch = 1
3210  If likely(!ctx) Then Return
3213  cpuctx = __get_cpu_context(ctx)
3214  If Not cpuctx->task_ctx Then Return
3217  rcu_read_lock() - mark the beginning of an RCU read-side critical section
3218  next_ctx = next->perf_event_ctxp[ctxn]
3219  If Not next_ctx Then Go to unlock
3222  parent = rcu_dereference(ctx->parent_ctx) (the parent_ctx fields let us detect when two contexts have both been cloned (inherited) from a common ancestor)
3223  next_parent = rcu_dereference(next_ctx->parent_ctx)
3226  If Not parent && Not next_parent Then Go to unlock
3229  If next_parent == ctx || next_ctx == parent || next_parent == parent Then
3239  raw_spin_lock(&ctx->lock)
3240  raw_spin_lock_nested(&next_ctx->lock, SINGLE_DEPTH_NESTING)
3242  pmu = ctx->pmu
3244  WRITE_ONCE(ctx->task, next)
3245  WRITE_ONCE(next_ctx->task, task)
3268  do_switch = 0
3272  raw_spin_unlock(&next_ctx->lock)
3273  raw_spin_unlock(&ctx->lock)
3275  unlock:
3276  rcu_read_unlock() - mark the end of an RCU read-side critical section
3278  If do_switch Then
3279  raw_spin_lock(&ctx->lock)
3280  task_ctx_sched_out(cpuctx, ctx, EVENT_ALL)
3281  raw_spin_unlock(&ctx->lock)
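The walkthrough above can be condensed into a minimal user-space sketch. The struct layouts and names below are simplified assumptions for illustration, not the real kernel definitions; locking, RCU, context_equiv(), and the pmu->swap_task_ctx hook are omitted.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, pared-down stand-ins for the kernel structures; only
 * the fields this function touches are modeled. */
struct task;

struct context {
    struct context *parent_ctx; /* set when cloned from a parent */
    struct task *task;          /* owning task (ctx->task) */
    void *task_ctx_data;        /* PMU-private per-task data */
};

struct task {
    struct context *perf_ctx;   /* one slot of perf_event_ctxp[] */
};

/* Clone test (source lines 3226-3229): two contexts may be clones if
 * one is the other's parent or both share a common parent. */
static int maybe_clones(const struct context *ctx,
                        const struct context *next_ctx)
{
    const struct context *parent = ctx->parent_ctx;
    const struct context *next_parent = next_ctx->parent_ctx;

    if (!parent && !next_parent)    /* neither is a clone: cannot match */
        return 0;

    return next_parent == ctx || next_ctx == parent ||
           next_parent == parent;
}

/* Optimized switch (source lines 3244-3268): rather than scheduling
 * events out of one context and into the other, the two equivalent
 * contexts simply trade owners and PMU data. */
static void swap_contexts(struct task *task, struct task *next)
{
    struct context *ctx = task->perf_ctx;
    struct context *next_ctx = next->perf_ctx;
    void *tmp;

    ctx->task = next;               /* WRITE_ONCE(ctx->task, next) */
    next_ctx->task = task;          /* WRITE_ONCE(next_ctx->task, task) */

    tmp = ctx->task_ctx_data;       /* swap(task_ctx_data pointers) */
    ctx->task_ctx_data = next_ctx->task_ctx_data;
    next_ctx->task_ctx_data = tmp;

    task->perf_ctx = next_ctx;      /* RCU_INIT_POINTER(...) pair */
    next->perf_ctx = ctx;
}
```

When the clone test fails (or, in the real code, context_equiv() rejects the pair), do_switch stays set and the function falls back to task_ctx_sched_out() at line 3280, unscheduling every event the slow way.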
Caller
Name                           Description
__perf_event_task_sched_out    Called from the scheduler to remove the events of the current task, with interrupts disabled