Function Logic Report
Source Code: arch\x86\include\asm\processor.h
Function name: cpu_relax
Prototype: static __always_inline void cpu_relax(void)
Return type: void
Parameters: none
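On x86, cpu_relax() expands to the PAUSE instruction ("rep; nop"): it lowers power draw inside a spin loop and hints the pipeline (and the SMT sibling) that the CPU is busy-waiting, without releasing the CPU. A minimal sketch of the typical caller pattern, in kernel-style C (the @ready flag and wait_until_ready() helper are hypothetical illustrations, not taken from the kernel source):

```c
#include <linux/atomic.h>
#include <asm/processor.h>	/* cpu_relax() */

static atomic_t ready = ATOMIC_INIT(0);	/* hypothetical shared flag */

/* Busy-wait until another CPU sets @ready, relaxing between polls. */
static void wait_until_ready(void)
{
	while (!atomic_read(&ready))
		cpu_relax();
}
```

The table below lists the kernel functions that call cpu_relax(), together with their source comments where available.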
| Name | Description |
|---|---|
| set_bits_ll | |
| clear_bits_ll | |
| raid6_choose_gen | |
| serial_putchar | These functions are in .inittext so they can be used to signal error during initialization. |
| default_do_nmi | |
| mach_get_cmos_time | |
| native_cpu_up | |
| native_apic_wait_icr_idle | |
| calibrate_APIC_clock | |
| __xapic_wait_icr_idle | |
| early_serial_putc | |
| amd_flush_garts | |
| kvm_async_pf_task_wake | |
| panic_smp_self_stop | Stop ourself in panic -- architecture code may override this |
| try_to_grab_pending | try_to_grab_pending - steal work item from worklist and disable irq. @work: work item to steal. @is_dwork: @work is a delayed_work. @flags: place to store irq state. Try to grab the PENDING bit of @work. This function can handle @work in any … |
| __queue_work | |
| __task_rq_lock | __task_rq_lock - lock the rq @p resides on. |
| task_rq_lock | task_rq_lock - lock p->pi_lock and lock the rq @p resides on. |
| do_task_dead | |
| __cond_resched_lock | __cond_resched_lock() - if a reschedule is pending, drop the given lock, call schedule, and on return reacquire the lock. |
| cpu_idle_poll | |
| osq_wait_next | Get a stable @node->next pointer, either for unlock() or unqueue() purposes. Can return NULL in case we were the last queued and we updated @lock instead. |
| osq_lock | |
| queued_spin_lock_slowpath | queued_spin_lock_slowpath - acquire the queued spinlock. @lock: pointer to the queued spinlock structure. @val: current value of the queued spinlock 32-bit word (queue tail, pending bit, lock value). |
| rt_mutex_adjust_prio_chain | Adjust the priority chain |
| power_down | power_down - Shut the machine down for hibernation. Use the platform driver, if configured, to put the system into the sleep state corresponding to hibernation, or try to power it off or reboot, depending on the value of hibernation_mode. |
| console_trylock_spinning | console_trylock_spinning - try to get console_lock by busy waiting. This allows busy waiting for the console_lock when the current owner is running in specially marked sections. It means that the current owner is running and cannot reschedule until it … |
| __synchronize_hardirq | |
| irq_finalize_oneshot | Oneshot interrupts keep the irq line masked until the threaded handler has finished. Unmask if the interrupt has not been disabled and is marked MASKED. |
| lock_timer_base | We are using hashed locking: holding per_cpu(timer_bases[x]).lock means that all timers which are tied to this base are locked, and the base itself is locked too. So __run_timers/migrate_timers can safely modify all timers which could … |
| acct_get | |
| cgroup_rstat_flush_locked | see cgroup_rstat_flush() |
| stop_machine_yield | |
| cpu_stop_queue_two_works | |
| stop_machine_from_inactive_cpu | stop_machine_from_inactive_cpu - stop_machine() from an inactive CPU. @fn: the function to run. @data: the data ptr for @fn(). @cpus: the cpus to run @fn() on (NULL = any online cpu). This is identical to stop_machine() but can be called from a CPU which … |
| wait_for_kprobe_optimizer | Wait for optimization and unoptimization to complete. |
| kdb_dump_stack_on_cpu | |
| kgdb_cpu_enter | |
| vkdb_printf | |
| kdb_reboot | kdb_reboot - implements the 'reboot' command: reboot the system immediately, or loop forever on failure. |
| kdb_kbd_cleanup_state | Best-effort cleanup of ENTER break codes on leaving KDB. Called on exiting KDB, when we know we processed an ENTER or KP ENTER scan code. |
| irq_work_sync | Synchronize against the irq_work @entry; ensures the entry is not currently in use. |
| wake_up_page_bit | |
| get_ksm_page | get_ksm_page: checks if the page indicated by the stable node is still its ksm page, despite having held no reference to it. In which case we can trust the content of the page, and it returns the gotten page; but if the page has now been zapped, … |
| __cmpxchg_double_slab | Interrupts must be disabled (for the fallback code to work right) |
| cmpxchg_double_slab | |
| __get_z3fold_header | |
| ioc_release_fn | Slow path for ioc release in put_io_context(). Performs double-lock dancing to unlink all icq's and then frees ioc. |
| blk_poll | blk_poll - poll for IO completions. @q: the queue. @cookie: cookie passed back at IO submission time. @spin: whether to spin for completions. Description: poll for completions on the passed-in queue. Returns the number of completed entries found. |
| blkcg_destroy_blkgs | blkcg_destroy_blkgs - responsible for shooting down blkgs. @blkcg: blkcg of interest. blkgs should be removed while holding both q and blkcg locks. |
| throtl_pending_timer_fn | |
| __d_lookup_rcu | __d_lookup_rcu - search for a dentry (racy, store-free). @parent: parent dentry. @name: qstr of the name we wish to find. @seqp: returns the d_seq value at the point where the dentry was found. Returns: dentry, or NULL. __d_lookup_rcu is the dcache lookup function … |
| start_dir_add | |
| __mnt_want_write | __mnt_want_write - get write access to a mount without freeze protection. @m: the mount on which to take a write. This tells the low-level filesystem that a write is about to be performed to it, and makes sure that writes are allowed (mount is read-write). |
| __ns_get_path | |
| io_ring_ctx_wait_and_kill | |
| io_worker_spin_for_work | |
| get_cached_acl | |
| __read_seqcount_begin | __read_seqcount_begin - begin a seq-read critical section (without barrier). @s: pointer to seqcount_t. Returns: count to be passed to read_seqcount_retry. __read_seqcount_begin is like read_seqcount_begin, but has no smp_rmb() barrier (see the reader-loop sketch after this table). |
| hrtimer_cancel_wait_running | |
| task_get_css | task_get_css - find and get the css for (task, subsys). @task: the target task. @subsys_id: the target subsystem ID. Find the css for the (@task, @subsys_id) combination, increment a reference on it and return it. This function is guaranteed to return a … |
| lock_cmos | All of these below must be called with interrupts off, preempt disabled, etc. |
| vdso_read_begin | |
| mcs_spin_unlock | Releases the lock. The caller should pass in the corresponding node that was used to acquire the lock. |
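Several of the callers above (__read_seqcount_begin, vdso_read_begin, __task_rq_lock) share a wait-then-retry shape: spin with cpu_relax() until a shared word changes, then re-check. A simplified reader-side sketch of the seqcount variant, assuming a seqcount_t protecting a hypothetical u64 sample (read_sample() and @value are illustrative names, not kernel APIs):

```c
#include <linux/seqlock.h>

/* Hypothetical reader: @seq versions concurrent writes to @value. */
static u64 read_sample(seqcount_t *seq, const u64 *value)
{
	unsigned int start;
	u64 v;

	do {
		/*
		 * read_seqcount_begin() spins via cpu_relax() while the
		 * count is odd, i.e. while a writer is in progress.
		 */
		start = read_seqcount_begin(seq);
		v = *value;
		/* Retry if a writer raced with the read above. */
	} while (read_seqcount_retry(seq, start));

	return v;
}
```

The locking callers (osq_lock, queued_spin_lock_slowpath, lock_timer_base) follow the same discipline: every iteration of the wait loop goes through cpu_relax(), so a tight spin stays cheap on the core and friendly to the SMT sibling.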