Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/smp.c    Create Date: 2022-07-28 10:55:56
Last Modify: 2020-03-17 15:12:54    Copyright © Brick

Name:smp_call_function_many(): Run a function on a set of other CPUs

Proto:void smp_call_function_many(const struct cpumask *mask, smp_call_func_t func, void *info, bool wait)

Type:void

Parameter:

Type                      Parameter Name
const struct cpumask *    mask
smp_call_func_t           func
void *                    info
bool                      wait
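
smp_call_function_many() runs func(info) on every online CPU in @mask except the calling CPU. func executes in interrupt context on the target CPUs, so it must be fast and non-blocking, and the caller must have preemption disabled. A minimal usage sketch follows; the callback name and the cache-flush scenario are hypothetical, not taken from the kernel source:

#include <linux/smp.h>
#include <linux/cpumask.h>
#include <linux/preempt.h>

/* Hypothetical callback: runs in IPI (hard-irq) context on every target
 * CPU, so it must be fast, non-blocking and must not sleep. */
static void drop_local_cache(void *info)
{
	unsigned long *generation = info;

	/* e.g. invalidate per-CPU cached data older than *generation */
	(void)generation;
}

static void drop_cache_on_other_cpus(unsigned long gen)
{
	/*
	 * Preemption must be disabled so that smp_processor_id() inside
	 * smp_call_function_many() is stable and the calling CPU is
	 * correctly excluded from the target set.
	 */
	preempt_disable();
	/* Passing a stack variable as @info is safe only because wait=true. */
	smp_call_function_many(cpu_online_mask, drop_local_cache,
			       &gen, true /* wait for all CPUs to finish */);
	preempt_enable();
}
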
416  this_cpu = smp_processor_id()
424  WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled() && !oops_in_progress && !early_boot_irqs_disabled): calling this with interrupts disabled can deadlock; CPUs that are not yet online, an oops in progress and early boot are exempt
433  WARN_ON_ONCE(!in_task())
436  cpu = cpumask_first_and(mask, cpu_online_mask): pick the first CPU that is both in @mask and online
437  If cpu == this_cpu Then cpu = cpumask_next_and(cpu, mask, cpu_online_mask): skip the calling CPU
441  If cpu >= nr_cpu_ids Then Return
445  next_cpu = cpumask_next_and(cpu, mask, cpu_online_mask): is there a second online CPU in @mask besides @cpu?
446  If next_cpu == this_cpu Then next_cpu = cpumask_next_and(next_cpu, mask, cpu_online_mask): again skip the calling CPU
450  If next_cpu >= nr_cpu_ids Then
451  smp_call_function_single(cpu, func, info, wait): fastpath - only one other CPU is targeted, so do that CPU by itself
452  Return
455  cfd = this_cpu_ptr(&cfd_data): this CPU's call_function_data
457  cpumask_and(cfd->cpumask, mask, cpu_online_mask): restrict the target set to online CPUs
458  __cpumask_clear_cpu(this_cpu, cfd->cpumask): never queue work for the calling CPU
461  If unlikely(!cpumask_weight(cfd->cpumask)) Then Return : some callers race with other CPUs changing the passed mask
464  cpumask_clear(cfd->cpumask_ipi)
466  csd = per_cpu_ptr(cfd->csd, cpu), for each cpu in cfd->cpumask
468  csd_lock(csd)
469  If wait Then csd->flags |= CSD_FLAG_SYNCHRONOUS
471  csd->func = func
472  csd->info = info
473  If llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)) Then __cpumask_set_cpu(cpu, cfd->cpumask_ipi): an IPI is needed only for CPUs whose call_single_queue was empty before this entry
478  arch_send_call_function_ipi_mask(cfd->cpumask_ipi): send a message to all CPUs in the map
480  If wait Then
484  csd = per_cpu_ptr(cfd->csd, cpu), for each cpu in cfd->cpumask
485  csd_lock_wait(csd): wait until the remote CPU has run func and released the csd
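
Putting the annotated lines back together, the control flow reads as follows. This is a condensed sketch paraphrasing the walk-through above, not a verbatim copy of kernel/smp.c; it relies on definitions private to that file (cfd_data, call_single_queue, csd_lock()/csd_lock_wait()), and the deadlock sanity checks at lines 424 and 433 are abbreviated to a comment.

void smp_call_function_many(const struct cpumask *mask,
			    smp_call_func_t func, void *info, bool wait)
{
	struct call_function_data *cfd;
	int cpu, next_cpu, this_cpu = smp_processor_id();

	/* WARN_ON_ONCE() deadlock sanity checks omitted (lines 424, 433). */

	/* Fastpath: find the first online target that is not this CPU. */
	cpu = cpumask_first_and(mask, cpu_online_mask);
	if (cpu == this_cpu)
		cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		return;				/* no online CPUs to call */

	/* Is there a second remote target? If not, go single. */
	next_cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
	if (next_cpu == this_cpu)
		next_cpu = cpumask_next_and(next_cpu, mask, cpu_online_mask);
	if (next_cpu >= nr_cpu_ids) {
		smp_call_function_single(cpu, func, info, wait);
		return;
	}

	/* Multi-CPU path: queue one csd per target CPU. */
	cfd = this_cpu_ptr(&cfd_data);
	cpumask_and(cfd->cpumask, mask, cpu_online_mask);
	__cpumask_clear_cpu(this_cpu, cfd->cpumask);
	if (unlikely(!cpumask_weight(cfd->cpumask)))
		return;

	cpumask_clear(cfd->cpumask_ipi);
	for_each_cpu(cpu, cfd->cpumask) {
		call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);

		csd_lock(csd);
		if (wait)
			csd->flags |= CSD_FLAG_SYNCHRONOUS;
		csd->func = func;
		csd->info = info;
		/* IPI only the CPUs whose queue was empty before this add. */
		if (llist_add(&csd->llist, &per_cpu(call_single_queue, cpu)))
			__cpumask_set_cpu(cpu, cfd->cpumask_ipi);
	}

	arch_send_call_function_ipi_mask(cfd->cpumask_ipi);

	if (wait) {
		for_each_cpu(cpu, cfd->cpumask) {
			call_single_data_t *csd = per_cpu_ptr(cfd->csd, cpu);

			csd_lock_wait(csd);
		}
	}
}
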
Caller
Name                               Describe
update_closid_rmid                 Update the PGR_ASSOC MSR on all cpus in @cpu_mask; per task closids/rmids must have been set up before calling this function
set_cache_qos_cfg
reset_all_ctrls
update_domains
membarrier_global_expedited
membarrier_private_expedited
sync_runqueues_membarrier_state
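
Callers such as the resctrl routines above typically pin the current CPU, run the update locally if this CPU is in the target mask, and let smp_call_function_many() cover the rest, since the function never calls back into the local CPU. A minimal sketch of that caller-side pattern; update_local_state(), struct my_state and update_state_on_cpus() are hypothetical placeholders, not kernel symbols:

#include <linux/smp.h>
#include <linux/cpumask.h>
#include <linux/types.h>

struct my_state {
	u32 value;			/* hypothetical per-update payload */
};

static void update_local_state(void *info)
{
	struct my_state *s = info;

	/* per-CPU work; on remote CPUs this runs in IPI context */
	(void)s;
}

static void update_state_on_cpus(const struct cpumask *cpu_mask,
				 struct my_state *s)
{
	int cpu = get_cpu();		/* pin this CPU, disable preemption */

	/* smp_call_function_many() skips the calling CPU, so cover it here. */
	if (cpumask_test_cpu(cpu, cpu_mask))
		update_local_state(s);

	/* wait == true: return only after every remote CPU has finished. */
	smp_call_function_many(cpu_mask, update_local_state, s, true);
	put_cpu();
}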