Function report

Linux Kernel

v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/sched/topology.c    Create Date: 2022-07-28 09:42:56
Last Modify: 2020-03-12 14:18:49    Copyright © Brick

Name: Build sched domains for a given set of CPUs and attach the sched domains to the individual CPUs.

Proto: static int build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)

Type: int

Parameter:

Type | Parameter Name
const struct cpumask * | cpu_map
struct sched_domain_attr * | attr
1984  alloc_state = sa_none
1987  struct rq * rq = NULL
1988  ret = -ENOMEM
1990  bool has_asym = false
1992  If WARN_ON(cpumask_empty(cpu_map)) Then Go to error (cpumask_empty() returns true when no CPU < nr_cpu_ids is set in *cpu_map)
1995  alloc_state = __visit_domain_allocation_hell( & d, cpu_map)
1996  If alloc_state != sa_rootdomain Then Go to error
1999  tl_asym = asym_cpu_capacity_level(cpu_map) (find the sched_domain_topology_level where all CPU capacities are visible for all CPUs)
2005  sd = NULL
2007  dflags = 0
2009  If tl == tl_asym Then
2010  dflags |= SD_ASYM_CPUCAPACITY
2011  has_asym = true
2014  If WARN_ON(!topology_span_sane(tl, cpu_map, i)) Then Go to error (ensure topology masks are sane, i.e. there are no conflicts (overlaps) for any two given CPUs at this (non-NUMA) topology level)
2017  sd = build_sched_domain(tl, cpu_map, attr, sd, dflags, i)
2019  If tl == sched_domain_topology Then *per_cpu_ptr(d.sd, i) = sd
2021  If tl->flags & SDTL_OVERLAP Then sd->flags |= SD_OVERLAP
2023  If cpumask_equal(cpu_map, sched_domain_span(sd)) Then Break (cpumask_equal() tests whether *src1p == *src2p)
2030  When sd cycle
2031  sd->span_weight = cpumask_weight(sched_domain_span(sd)) (cpumask_weight() counts the set bits < nr_cpu_ids in *srcp)
2032  If sd->flags & SD_OVERLAP Then
2033  If build_overlap_sched_groups(sd, i) Then Go to error
2035  Else
2043  When i >= 0 cycle
2044  If Not cpumask_test_cpu(i, cpu_map) Then Continue (cpumask_test_cpu() returns 1 if @cpu is set in @cpumask, else 0)
2047  When sd cycle
2054  rcu_read_lock() (mark the beginning of an RCU read-side critical section)
2056  rq = cpu_rq(i)
2057  sd = *per_cpu_ptr(d.sd, i)
2060  If rq->cpu_capacity_orig > READ_ONCE(d.rd->max_cpu_capacity) Then WRITE_ONCE(d.rd->max_cpu_capacity, rq->cpu_capacity_orig)
2063  cpu_attach_domain(sd, d.rd, i) (attach the domain 'sd' to CPU i as its base domain; callers must hold the hotplug lock)
2065  rcu_read_unlock() (marks the end of an RCU read-side critical section)
2067  If has_asym Then static_branch_inc_cpuslocked( & sched_asym_cpucapacity)
2070  If rq && sched_debug_enabled Then
2071  pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n", cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity) (cpumask_pr_args() supplies the arguments for a '%*pb[l]' printf specifier when printing a cpumask)
2075  ret = 0
2076  error:
2077  __free_domain_allocs( & d, alloc_state, cpu_map)
2079  Return ret
Caller

Name | Describe
sched_init_domains | Set up scheduler domains and groups. For now this just excludes isolated CPUs, but could be used to exclude other special cases in the future.
partition_sched_domains_locked | Partition sched domains as specified by the 'ndoms_new' cpumasks in the array doms_new[] of cpumasks. This compares doms_new[] to the current sched domain partitioning, doms_cur[]. It destroys each deleted domain and builds each new domain.