Function Logic Report
Source Code: include/linux/cpumask.h
Create Date: 2022-07-27 06:38:53
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function name: cpumask_empty - *srcp == 0
@srcp: the cpumask to check that all cpus < nr_cpu_ids are clear.
Function prototype: static inline bool cpumask_empty(const struct cpumask *srcp)
Return type: bool
Parameters:
Type | Name |
---|---|
const struct cpumask * | srcp |
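For reference, a minimal sketch of the definition this report describes, assuming the usual bitmap backing of struct cpumask (the exact body may differ between kernel versions):

```c
/* Sketch only: bitmap_empty(), cpumask_bits() and nr_cpumask_bits are the
 * standard helpers from <linux/bitmap.h> / <linux/cpumask.h>. */
static inline bool cpumask_empty(const struct cpumask *srcp)
{
	/* true when no cpu < nr_cpu_ids is set in *srcp */
	return bitmap_empty(cpumask_bits(srcp), nr_cpumask_bits);
}
```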
Caller | Description |
---|---|
domain_remove_cpu | |
update_domains | |
rdtgroup_locksetup_enter | rdtgroup_locksetup_enter - Resource group enters locksetup mode @rdtgrp: resource group requested to enter locksetup mode. A resource group enters locksetup mode to reflect that it would be used to represent a pseudo-locked region and is in the process of … |
activate_managed | |
test_nmi_ipi | |
remote_ipi | |
wq_select_unbound_cpu | When queueing an unbound work item to a wq, prefer local CPU if allowed by wq_unbound_cpumask. Otherwise, round robin among the allowed ones to avoid perturbing sensitive tasks. |
wq_calc_node_cpumask | wq_calc_node_cpumask - calculate a wq_attrs' cpumask for the specified node @attrs: the wq_attrs of the default pwq of the target workqueue @node: the target NUMA node @cpu_going_down: if >= 0, the CPU to consider as offline @cpumask: outarg, the … |
apply_wqattrs_prepare | allocate the attrs and pwqs for later installation |
workqueue_set_unbound_cpumask | workqueue_set_unbound_cpumask - Set the low-level unbound cpumask @cpumask: the cpumask to set. The low-level workqueues cpumask is a global cpumask that limits the affinity of all unbound workqueues. This function checks the @cpumask … |
build_balance_mask | Build the balance mask; it contains only those CPUs that can arrive at this group and should be considered to continue balancing |
build_sched_domains | Build sched domains for a given set of CPUs and attach the sched domains to the individual CPUs |
housekeeping_init | |
housekeeping_setup | |
__free_percpu_irq | Internal function to unregister a percpu irqaction. |
irq_move_masked_irq | |
irq_matrix_alloc_managed | irq_matrix_alloc_managed - Allocate a managed interrupt in a CPU map @m: Matrix pointer @cpu: On which CPU the interrupt should be allocated |
tick_install_broadcast_device | Conditionally install/replace broadcast device |
tick_device_uses_broadcast | Check if the device is dysfunctional and a placeholder, which needs to be handled by the broadcast device. |
tick_do_broadcast | Broadcast the event to the cpus, which are set in the mask (mangled). |
tick_broadcast_control | tick_broadcast_control - Enable/disable or force broadcast mode @mode: The selected broadcast mode. Called when the system enters a state where affected tick devices might stop. Note: TICK_BROADCAST_FORCE cannot be undone. |
tick_resume_broadcast | |
validate_change | validate_change() - Used to validate that any proposed cpuset change follows the structural rules for cpusets |
update_parent_subparts_cpumask | update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset @cpuset: The cpuset that requests change in partition root state @cmd: Partition root state change command @newmask: Optional new cpumask for partcmd_update @tmp: Temporary addmask |
update_cpumasks_hier | update_cpumasks_hier - Update effective cpumasks and tasks in the subtree @cs: the cpuset to consider @tmp: temp variables for calculating effective_cpus & partition setup. When the configured cpumask is changed, the effective cpumasks of this cpuset … |
update_cpumask | update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it @cs: the cpuset to consider @trialcs: trial cpuset @buf: buffer of cpu numbers written to this cpuset |
update_relax_domain_level | |
update_flag | update_flag - read a 0 or a 1 in a file and update associated flag. Call with cpuset_mutex held. |
update_prstate | update_prstate - update partition_root_state. Call with cpuset_mutex held. |
cpuset_can_attach | Called by cgroups to determine if a cpuset is usable; cpuset_mutex held |
remove_tasks_in_empty_cpuset | If CPU and/or memory hotplug handlers, below, unplug any CPUs or memory nodes, we need to walk over the cpuset hierarchy, removing that CPU or node from all cpusets. If this removes the last CPU or node from a cpuset, then move the tasks in the empty … |
hotplug_update_tasks_legacy | |
hotplug_update_tasks | |
cpuset_hotplug_update_tasks | cpuset_hotplug_update_tasks - update tasks in a cpuset for hotunplug @cs: cpuset in interest @tmp: the tmpmasks structure pointer. Compare @cs's cpu and mem masks against top_cpuset and if some have gone offline, update @cs accordingly |
kswapd | The background pageout daemon, started as a kernel thread from the init process. This basically trickles out pages so that we have _some_ free memory available even if there is no other activity that frees anything up |
find_next_best_node | find_next_best_node - find the next node that should appear in a given node's fallback list @node: node whose fallback list we're appending @used_node_mask: nodemask_t of already used nodes. We use a number of factors to determine which is the next node that … |
__blk_mq_run_hw_queue | |
policy_is_inactive | |
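The callers above share one pattern: test cpumask_empty() on a candidate mask and bail out or fall back before doing any per-CPU work. Below is a minimal, hypothetical sketch of that pattern; the helper name example_pick_cpu and its return convention are illustrative only and do not come from any of the functions listed.

```c
#include <linux/cpumask.h>
#include <linux/errno.h>

/* Hypothetical helper showing the common check-before-use pattern. */
static int example_pick_cpu(const struct cpumask *allowed)
{
	/* No cpu < nr_cpu_ids is set: nothing to operate on. */
	if (cpumask_empty(allowed))
		return -EINVAL;

	/* Otherwise pick the lowest-numbered allowed CPU. */
	return cpumask_first(allowed);
}
```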