Function Logic Report
Source Code: include/linux/cpumask.h
Function: cpumask_test_cpu - test for a cpu in a cpumask
Prototype: static inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
Return type: int
Parameters:
Type | Parameter | Name
---|---|---
int | cpu | |
const struct cpumask * | cpumask | |
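For reference, in kernels of roughly this vintage the body of cpumask_test_cpu is a single test_bit() over the mask's bitmap. The sketch below is reconstructed from memory of the header rather than quoted from this exact source tree, and assumes the usual cpumask_check() and cpumask_bits() helpers from include/linux/cpumask.h:

```c
/* Sketch of the function logic: cpumask_check() bounds-checks @cpu and
 * cpumask_bits() exposes the mask's underlying unsigned long bitmap. */
static inline int cpumask_test_cpu(int cpu, const struct cpumask *cpumask)
{
	return test_bit(cpumask_check(cpu), cpumask_bits(cpumask));
}
```

test_bit() returns non-zero when the bit for @cpu is set, so the int return value behaves as a boolean: 1 if @cpu is present in @cpumask, 0 otherwise.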
Callers:

Name | Description
---|---
cpumask_local_spread | cpumask_local_spread - select the i'th cpu with local numa cpu's first*@i: index number*@node: local numa_node* This function selects an online CPU according to a numa aware policy;* local cpus are returned first, followed by non-local ones, then it |
mce_device_remove | |
get_domain_from_cpu | |
update_closid_rmid | Update the PGR_ASSOC MSR on all cpus in @cpu_mask,* Per task closids/rmids must have been set up before calling this function. |
set_cache_qos_cfg | |
reset_all_ctrls | |
add_rmid_to_limbo | |
update_domains | |
impress_friends | |
do_boot_cpu | NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad* (ie clustered apic addressing mode), this is a LOGICAL apic ID.* Returns zero if CPU booted OK, else error code from* ->wakeup_secondary_cpu. |
native_cpu_up | |
assign_vector_locked | |
assign_managed_vector | |
wq_select_unbound_cpu | When queueing an unbound work item to a wq, prefer local CPU if allowed* by wq_unbound_cpumask. Otherwise, round robin among the allowed ones to* avoid perturbing sensitive tasks. |
cpudl_find | cpudl_find - find the best (later-dl) CPU in the system*@cp: the cpudl max-heap context*@p: the task*@later_mask: a mask to fill in with the selected CPUs (or NULL)* Returns: int - CPUs were found |
rq_attach_root | |
build_overlap_sched_groups | |
build_sched_groups | build_sched_groups will build a circular linked list of the groups* covered by the given span, will set each group's ->cpumask correctly,* and will initialize their ->sgc.* Assumes the sched_domain tree is fully constructed |
build_sched_domains | Build sched domains for a given set of CPUs and attach the sched domains* to the individual CPUs |
cpufreq_this_cpu_can_update | cpufreq_this_cpu_can_update - Check if cpufreq policy can be updated.*@policy: cpufreq policy to check.* Return 'true' if:* - the local and remote CPUs share @policy,* - dvfs_possible_from_any_cpu is set in @policy and the local CPU is not going |
housekeeping_test_cpu | |
irq_percpu_is_enabled | irq_percpu_is_enabled - Check whether the per cpu irq is enabled*@irq: Linux irq number to check for* Must be called from a non migratable context. Returns the enable* state of a per cpu interrupt on the current cpu. |
handle_percpu_devid_irq | handle_percpu_devid_irq - Per CPU local irq handler with per cpu dev ids*@desc: the interrupt description structure for this irq* Per CPU interrupts on SMP machines without locking requirements |
irq_needs_fixup | For !GENERIC_IRQ_EFFECTIVE_AFF_MASK this looks at general affinity mask |
irq_restore_affinity_of_irq | |
ipi_get_hwirq | ipi_get_hwirq - Get the hwirq associated with an IPI to a cpu*@irq: linux irq number*@cpu: the target cpu* When dealing with coprocessors IPI, we need to inform the coprocessor of* the hwirq it needs to use to receive and send IPIs. |
ipi_send_verify | |
profile_tick | |
tick_check_percpu | |
tick_device_uses_broadcast | Check, if the device is disfunctional and a place holder, which* needs to be handled by the broadcast device. |
tick_do_broadcast | Broadcast the event to the cpus, which are set in the mask (mangled). |
tick_resume_check_broadcast | This is called from tick_resume_local() on a resuming CPU. That's* called from the core resume function, tick_unfreeze() and the magic XEN* resume hackery.* In none of these cases the broadcast device mode can change and the |
smp_call_function_any | smp_call_function_any - Run a function on any of the given cpus*@mask: The mask of cpus it can run on.*@func: The function to run. This must be fast and non-blocking.*@info: An arbitrary pointer to pass to the function. |
on_each_cpu_mask | on_each_cpu_mask(): Run a function on processors specified by* cpumask, which may include the local processor.*@mask: The set of cpus to run on (only runs on online subset).*@func: The function to run. This must be fast and non-blocking. |
on_each_cpu_mask | Note we still need to test the mask even for UP* because we actually can get an empty mask from* code that on SMP might call us without the local* CPU in the mask. |
multi_cpu_stop | This is the cpu_stop function which stops the CPU. |
ring_buffer_wait | ring_buffer_wait - wait for input to the ring buffer*@buffer: buffer to wait on*@cpu: the cpu buffer to wait on*@full: wait until a full page is available, if @cpu != RING_BUFFER_ALL_CPUS* If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon |
ring_buffer_poll_wait | ring_buffer_poll_wait - poll on buffer input*@buffer: buffer to wait on*@cpu: the cpu buffer to wait on*@filp: the file descriptor*@poll_table: The poll descriptor* If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon |
ring_buffer_resize | ring_buffer_resize - resize the ring buffer*@buffer: the buffer to resize.*@size: the new size.*@cpu_id: the cpu buffer to resize* Minimum size is 2 * BUF_PAGE_SIZE.* Returns 0 on success and < 0 on failure. |
ring_buffer_lock_reserve | ring_buffer_lock_reserve - reserve a part of the buffer*@buffer: the ring buffer to reserve from*@length: the length of the data to reserve (excluding event header)* Returns a reserved event on the ring buffer to copy directly to |
ring_buffer_write | ring_buffer_write - write data to the buffer without reserving*@buffer: The ring buffer to write to |
ring_buffer_record_disable_cpu | ring_buffer_record_disable_cpu - stop all writes into the cpu_buffer*@buffer: The ring buffer to stop writes to.*@cpu: The CPU buffer to stop* This prevents all writes to the buffer. Any attempt to write* to the buffer after this will fail and return NULL. |
ring_buffer_record_enable_cpu | ring_buffer_record_enable_cpu - enable writes to the buffer*@buffer: The ring buffer to enable writes*@cpu: The CPU to enable.* Note, multiple disables will need the same number of enables* to truly enable the writing (much like preempt_disable). |
ring_buffer_oldest_event_ts | ring_buffer_oldest_event_ts - get the oldest event timestamp from the buffer*@buffer: The ring buffer*@cpu: The per CPU buffer to read from. |
ring_buffer_bytes_cpu | ring_buffer_bytes_cpu - get the number of bytes consumed in a cpu buffer*@buffer: The ring buffer*@cpu: The per CPU buffer to read from. |
ring_buffer_entries_cpu | ring_buffer_entries_cpu - get the number of entries in a cpu buffer*@buffer: The ring buffer*@cpu: The per CPU buffer to get the entries from. |
ring_buffer_overrun_cpu | ring_buffer_overrun_cpu - get the number of overruns caused by the ring* buffer wrapping around (only if RB_FL_OVERWRITE is on).*@buffer: The ring buffer*@cpu: The per CPU buffer to get the number of overruns from |
ring_buffer_commit_overrun_cpu | ring_buffer_commit_overrun_cpu - get the number of overruns caused by* commits failing due to the buffer wrapping around while there are uncommitted* events, such as during an interrupt storm |
ring_buffer_dropped_events_cpu | ring_buffer_dropped_events_cpu - get the number of dropped events caused by* the ring buffer filling up (only if RB_FL_OVERWRITE is off).*@buffer: The ring buffer*@cpu: The per CPU buffer to get the number of overruns from |
ring_buffer_read_events_cpu | ring_buffer_read_events_cpu - get the number of events successfully read*@buffer: The ring buffer*@cpu: The per CPU buffer to get the number of events read |
ring_buffer_peek | ring_buffer_peek - peek at the next event to be read*@buffer: The ring buffer to read*@cpu: The cpu to peek at*@ts: The timestamp counter of this event |
ring_buffer_consume | ring_buffer_consume - return an event and consume it*@buffer: The ring buffer to get the next event from*@cpu: the cpu to read the buffer from*@ts: a variable to store the timestamp (may be NULL)*@lost_events: a variable to store if events were lost (may be |
ring_buffer_read_prepare | ring_buffer_read_prepare - Prepare for a non consuming read of the buffer*@buffer: The ring buffer to read from*@cpu: The cpu buffer to iterate over*@flags: gfp flags to use for memory allocation* This performs the initial preparations necessary to iterate |
ring_buffer_size | ring_buffer_size - return the size of the ring buffer (in bytes)*@buffer: The ring buffer. |
ring_buffer_reset_cpu | ring_buffer_reset_cpu - reset a ring buffer per CPU buffer*@buffer: The ring buffer to reset a per cpu buffer of*@cpu: The CPU buffer to be reset |
ring_buffer_empty_cpu | ring_buffer_empty_cpu - is a cpu buffer of a ring buffer empty?*@buffer: The ring buffer*@cpu: The CPU buffer to test |
ring_buffer_alloc_read_page | ring_buffer_alloc_read_page - allocate a page to read from buffer*@buffer: the buffer to allocate for |
ring_buffer_read_page | ring_buffer_read_page - extract a page from the ring buffer*@buffer: buffer to extract from*@data_page: the page to use allocated from ring_buffer_alloc_read_page*@len: amount to extract*@cpu: the cpu of the buffer to extract |
trace_rb_cpu_prepare | We only allocate new buffers, never free them if the CPU goes down.* If we were to free the buffer, then the user would lose any trace that was in* the buffer. |
test_cpu_buff_start | |
tracing_cpumask_write | |
tracing_resize_ring_buffer | |
swevent_hlist_get_cpu | |
perf_pmu_register | |
padata_do_parallel | padata_do_parallel - padata parallelization function*@ps: padatashell*@padata: object to be parallelized*@cb_cpu: pointer to the CPU that the serialization callback function should* run on. If it's not in the serial cpumask of @pinst* (i |
__blk_mq_run_hw_queue | |
__blk_mq_delay_run_hw_queue | |
blk_mq_map_swqueue | |
stop_cpus | |
set_cpus_allowed_ptr | |
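The callers above share one pattern: before doing per-CPU work they check whether a CPU of interest is present in some mask (for example wq_unbound_cpumask in wq_select_unbound_cpu, or a trace buffer's recorded-CPU mask in the ring_buffer_* helpers). A minimal, self-contained module-style sketch of that pattern follows; the module name and the choice of cpu_online_mask are illustrative assumptions, not taken from any caller listed here.

```c
/* Illustrative sketch only: report whether the CPU we are running on
 * is set in cpu_online_mask, using cpumask_test_cpu(). */
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/smp.h>

static int __init cpumask_test_demo_init(void)
{
	int cpu = get_cpu();	/* pin ourselves to the current CPU while testing it */

	if (cpumask_test_cpu(cpu, cpu_online_mask))
		pr_info("cpumask_test_demo: cpu %d is set in cpu_online_mask\n", cpu);
	else
		pr_info("cpumask_test_demo: cpu %d is not in cpu_online_mask\n", cpu);

	put_cpu();
	return 0;
}

static void __exit cpumask_test_demo_exit(void)
{
}

module_init(cpumask_test_demo_init);
module_exit(cpumask_test_demo_exit);
MODULE_LICENSE("GPL");
```

The same call works on any struct cpumask, which is why it shows up across workqueue, scheduler, irq and tracing code alike: the mask being tested is the only thing that changes.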