Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/sched/membarrier.c    Create Date: 2022-07-28 09:45:29
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: membarrier_private_expedited

Proto: static int membarrier_private_expedited(int flags)

Return Type: int

Parameter:

Type    Parameter Name
int     flags
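
The flags argument here is a kernel-internal bitmask, distinct from the userspace flags argument of the system call. From memory of the v5.5 headers, the relevant bit and the readiness states tested below are defined in kernel/sched/sched.h; treat this excerpt as an unverified sketch and check the tree if in doubt:

    /* kernel/sched/sched.h (v5.5) -- excerpt from memory, not verified verbatim */
    enum {
            MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY           = (1U << 0),
            MEMBARRIER_STATE_PRIVATE_EXPEDITED                 = (1U << 1),
            MEMBARRIER_STATE_GLOBAL_EXPEDITED_READY            = (1U << 2),
            MEMBARRIER_STATE_GLOBAL_EXPEDITED                  = (1U << 3),
            MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY = (1U << 4),
            MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE       = (1U << 5),
    };

    enum {
            MEMBARRIER_FLAG_SYNC_CORE = (1U << 0),
    };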
136  mm = current->mm
138  If flags & MEMBARRIER_FLAG_SYNC_CORE Then
139  If Not IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE) Then Return -EINVAL
141  If Not (atomic_read(&mm->membarrier_state) & MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY) Then Return -EPERM
144  Else
145  If Not (atomic_read(&mm->membarrier_state) & MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY) Then Return -EPERM
150  If atomic_read(&mm->mm_users) == 1 || num_online_cpus() == 1 Then Return 0 (a single-threaded process, or a single online CPU, needs no explicit ordering)
157  smp_mb() (matches the memory barriers around rq->curr modification in the scheduler; system call entry is not itself a barrier)
159  If Not zalloc_cpumask_var(&tmpmask, GFP_KERNEL) Then Return -ENOMEM
162  cpus_read_lock()
163  rcu_read_lock() (begin an RCU read-side critical section protecting the rq->curr dereference below)
164  for_each_online_cpu(cpu)
175  If cpu == raw_smp_processor_id() Then Continue (the calling CPU is already in program order with respect to the caller thread, so it can be skipped)
177  p = rcu_dereference(cpu_rq(cpu)->curr)
178  If p && p->mm == mm Then __cpumask_set_cpu(cpu, tmpmask)
181  rcu_read_unlock()
183  preempt_disable() (preempt disable/enable act as barriers, so that faulting operations such as get_user/put_user cannot schedule a migration out of this preempt-protected region)
184  smp_call_function_many(tmpmask, ipi_mb, NULL, 1)
185  preempt_enable()
187  free_cpumask_var(tmpmask)
188  cpus_read_unlock()
195  smp_mb() (memory barrier on the caller thread after the last IPI has completed; exit from the system call is not itself a barrier)
197  Return 0
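
Taken together, the annotated lines correspond to the following C shape. This is a sketch reconstructed from the walk-through above against the v5.5 sources, with paraphrased comments; it is not a verbatim copy of the file:

    /* kernel/sched/membarrier.c -- needs the declarations from "sched.h" */
    static void ipi_mb(void *info)
    {
            smp_mb();       /* IPIs should be serializing but just in case. */
    }

    static int membarrier_private_expedited(int flags)
    {
            int cpu;
            cpumask_var_t tmpmask;
            struct mm_struct *mm = current->mm;

            if (flags & MEMBARRIER_FLAG_SYNC_CORE) {
                    if (!IS_ENABLED(CONFIG_ARCH_HAS_MEMBARRIER_SYNC_CORE))
                            return -EINVAL;
                    if (!(atomic_read(&mm->membarrier_state) &
                          MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE_READY))
                            return -EPERM;
            } else {
                    if (!(atomic_read(&mm->membarrier_state) &
                          MEMBARRIER_STATE_PRIVATE_EXPEDITED_READY))
                            return -EPERM;
            }

            /* Single-threaded process, or single online CPU: nothing to order. */
            if (atomic_read(&mm->mm_users) == 1 || num_online_cpus() == 1)
                    return 0;

            /* Matches the barriers around rq->curr updates in the scheduler;
             * system call entry is not itself a full barrier. */
            smp_mb();

            if (!zalloc_cpumask_var(&tmpmask, GFP_KERNEL))
                    return -ENOMEM;

            cpus_read_lock();
            rcu_read_lock();
            for_each_online_cpu(cpu) {
                    struct task_struct *p;

                    /* The calling CPU is already ordered with the caller thread. */
                    if (cpu == raw_smp_processor_id())
                            continue;
                    p = rcu_dereference(cpu_rq(cpu)->curr);
                    if (p && p->mm == mm)
                            __cpumask_set_cpu(cpu, tmpmask);
            }
            rcu_read_unlock();

            /* preempt disable/enable act as barriers around the IPI broadcast. */
            preempt_disable();
            smp_call_function_many(tmpmask, ipi_mb, NULL, 1);   /* wait == 1 */
            preempt_enable();

            free_cpumask_var(tmpmask);
            cpus_read_unlock();

            /* Barrier on the caller after the last IPI has completed;
             * exit from the system call is not itself a full barrier. */
            smp_mb();

            return 0;
    }

The scheme relies on the scheduler for the threads that are not currently running: such a thread executes a full barrier when it is switched back in, so only CPUs whose current task shares mm need an IPI, and the two smp_mb() calls on the caller pair with the barriers around rq->curr updates so that no concurrently running thread is missed.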
Caller

Name             Describe
SYSCALL_DEFINE2  sys_membarrier - issue memory barriers on a set of threads; @cmd takes command values defined in enum membarrier_cmd
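
For context on how this function is reached: userspace invokes it through the membarrier(2) system call listed above. A minimal caller might look like the following (illustrative; the sys_membarrier wrapper name is local to the example, since glibc provides no wrapper and raw syscall(2) must be used; Linux >= 4.14 is required for the private expedited commands):

    #include <linux/membarrier.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Local wrapper: glibc does not wrap this system call. */
    static int sys_membarrier(int cmd, unsigned int flags)
    {
            return syscall(__NR_membarrier, cmd, flags);
    }

    int main(void)
    {
            /* A process must register first, or membarrier_private_expedited()
             * fails the READY check above and returns -EPERM. */
            if (sys_membarrier(MEMBARRIER_CMD_REGISTER_PRIVATE_EXPEDITED, 0)) {
                    perror("membarrier register");
                    return 1;
            }
            /* Issue a full memory barrier on every running thread of this process. */
            if (sys_membarrier(MEMBARRIER_CMD_PRIVATE_EXPEDITED, 0)) {
                    perror("membarrier");
                    return 1;
            }
            return 0;
    }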