Function Logic Report
Source Code: include/linux/sched/signal.h
Create Date: 2022-07-27 06:42:31
Last Modify: 2020-03-12 14:18:49
Function Name: signal_pending
Prototype: static inline int signal_pending(struct task_struct *p)
Return Type: int
Parameters:

Type | Name
---|---
struct task_struct * | p
349 | Return: test_tsk_thread_flag(p, TIF_SIGPENDING), wrapped in unlikely() as a hint to the compiler that the condition rarely holds. |
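For reference, a minimal sketch of the body this line describes, matching the inline definition in include/linux/sched/signal.h (kernel context assumed; the exact form may vary by kernel version):

```c
#include <linux/sched.h>	/* struct task_struct, test_tsk_thread_flag() */

static inline int signal_pending(struct task_struct *p)
{
	/*
	 * TIF_SIGPENDING is set on the task when a signal is queued for
	 * it; unlikely() marks the no-signal case as the common path so
	 * the compiler lays out the branch accordingly.
	 */
	return unlikely(test_tsk_thread_flag(p, TIF_SIGPENDING));
}
```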
Callers:

Name | Description
---|---
do_read | |
copy_process | Create a new process |
do_wait | |
ptrace_peek_siginfo | |
wants_signal | Test if P wants to take SIG. After we've checked all threads with this, it's equivalent to finding no threads not blocking SIG. Any threads not blocking SIG were ruled out because they are not running and already have pending signals. |
retarget_shared_pending | It could be that complete_signal() picked us to notify about the group-wide signal. Other threads should be notified now to take the shared signals in @which since we will not. |
exit_signals | |
__set_task_blocked | |
sys_pause | |
sigsuspend | |
do_wait_intr | Note! These two wait functions are entered with the wait-queue lock held (and interrupts off in the _irq case), so there is no race with testing the wakeup condition in the caller before they add the wait entry to the wake queue. |
do_wait_intr_irq | |
__rt_mutex_slowlock | __rt_mutex_slowlock() - Perform the wait-wake-try-to-take loop. @lock: the rt_mutex to take; @state: the state the task should block in (TASK_INTERRUPTIBLE or TASK_UNINTERRUPTIBLE); @timeout: the pre-initialized and started timer, or NULL for none; @waiter: … |
rcu_gp_fqs_loop | Loop doing repeated quiescent-state forcing until the grace period ends. |
rcu_gp_kthread | Body of kthread that handles grace periods. |
msleep_interruptible | msleep_interruptible - sleep waiting for signals. @msecs: Time in milliseconds to sleep for. |
do_nanosleep | |
do_cpu_nanosleep | |
futex_wait | |
handle_early_requeue_pi_wakeup | handle_early_requeue_pi_wakeup() - Detect early wakeup on the initial futex. @hb: the hash_bucket futex_q was originally enqueued on; @q: the futex_q woken while waiting to be requeued; @key2: the futex_key of the requeue target futex; @timeout: the timeout … |
ring_buffer_wait | ring_buffer_wait - wait for input to the ring buffer. @buffer: buffer to wait on; @cpu: the cpu buffer to wait on; @full: wait until a full page is available, if @cpu != RING_BUFFER_ALL_CPUS. If @cpu == RING_BUFFER_ALL_CPUS then the task will wake up as soon … |
do_check | |
uprobe_deny_signal | If we are singlestepping, then ensure this thread is not connected to non-fatal signals until completion of singlestep. When xol insn itself triggers the signal, restart the original insn even if the task is … |
mm_take_all_locks | This operation locks against the VM for all pte/vma/mm related operations that could ever happen on a certain mm. This includes vmtruncate, try_to_unmap, and all page faults. The caller must take the mmap_sem in write mode before calling … |
try_to_unuse | If the boolean frontswap is true, only unuse pages_to_unuse pages; pages_to_unuse==0 means all pages; ignored if frontswap is false. |
set_max_huge_pages | |
unmerge_ksm_pages | Though it's very tempting to unmerge rmap_items from stable tree rather than check every pte of a given vma, the locking doesn't quite work for that - an rmap_item is assigned to the stable tree after inserting ksm page and upping mmap_sem … |
mem_cgroup_resize_max | |
mem_cgroup_force_empty | Reclaims as many pages from the given memcg as possible. Caller is responsible for holding css reference for memcg. |
memory_high_write | |
memory_max_write | |
scan_should_stop | Memory scanning is a long process and it needs to be interruptible. This function checks whether such an interrupt condition occurred. |
do_msgsnd | |
do_msgrcv | |
do_semtimedop | |
wq_sleep | Puts current task to sleep. Caller must hold queue lock. After return, lock isn't held. |
blk_mq_poll_hybrid_sleep | |
pipe_write | |
wait_for_partner | |
filldir | |
filldir64 | |
compat_filldir | |
do_select | |
core_sys_select | We can actually return ERESTARTSYS instead of EINTR, but I'd like to be certain this leads to no problems. So I return EINTR just for safety. Update: ERESTARTSYS breaks at least the xview clock binary, so … |
do_poll | |
compat_core_sys_select | We can actually return ERESTARTSYS instead of EINTR, but I'd like to be certain this leads to no problems. So I return EINTR just for safety. Update: ERESTARTSYS breaks at least the xview clock binary, so … |
splice_from_pipe_next | splice_from_pipe_next - wait for some data to splice from. @pipe: pipe to splice from; @sd: information about the splice operation. Description: This function will wait for some data and return a positive value (one) if pipe buffers are available … |
wait_for_space | |
ipipe_prep | Make sure there's data to read. Wait for input if we can, otherwise return an appropriate error. |
opipe_prep | Make sure there's writeable room. Wait for room if we can, otherwise return an appropriate error. |
inotify_read | |
fanotify_read | |
ep_poll | ep_poll - Retrieves ready events, and delivers them to the caller-supplied event buffer. @ep: Pointer to the eventpoll context; @events: Pointer to the userspace buffer where the ready events should be stored. |
signalfd_dequeue | |
eventfd_read | |
eventfd_write | |
handle_userfault | The locking rules involved in returning VM_FAULT_RETRY depending on FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and FAULT_FLAG_KILLABLE are not straightforward. |
userfaultfd_ctx_read | |
SYSCALL_DEFINE6 | |
COMPAT_SYSCALL_DEFINE6 | |
io_sq_thread | |
io_cqring_wait | Wait until events become available, if we don't already have some. The application must reap them itself, as they reside on the shared cq ring. |
io_worker_handle_work | |
io_wqe_worker | |
dump_interrupted | |
fatal_signal_pending | |
signal_pending_state | |
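Most callers above share one idiom: sleep until a condition holds, but abort the wait when signal_pending() reports a queued signal so the syscall can return -ERESTARTSYS or -EINTR. A minimal sketch of that pattern follows; wait_for_event() and condition_met() are hypothetical names, not taken from any caller in the table:

```c
#include <linux/sched.h>
#include <linux/sched/signal.h>
#include <linux/errno.h>

/* Hypothetical wait loop showing the common signal_pending() idiom. */
static long wait_for_event(void)
{
	long ret = 0;

	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (condition_met())	/* hypothetical wakeup condition */
			break;
		if (signal_pending(current)) {
			/* A signal is queued: stop sleeping so the caller
			 * can return to user space and have it delivered. */
			ret = -ERESTARTSYS;
			break;
		}
		schedule();
	}
	__set_current_state(TASK_RUNNING);
	return ret;
}
```

The last two callers in the table, fatal_signal_pending and signal_pending_state, are wrappers defined in the same header and built on signal_pending(). Sketched from the mainline header of this era (verify against your exact kernel version):

```c
static inline int __fatal_signal_pending(struct task_struct *p)
{
	/* SIGKILL pending means the task is being killed. */
	return unlikely(sigismember(&p->pending.signal, SIGKILL));
}

static inline int fatal_signal_pending(struct task_struct *p)
{
	return signal_pending(p) && __fatal_signal_pending(p);
}

static inline int signal_pending_state(long state, struct task_struct *p)
{
	/* Only interruptible or killable sleeps care about signals. */
	if (!(state & (TASK_INTERRUPTIBLE | TASK_WAKEKILL)))
		return 0;
	if (!signal_pending(p))
		return 0;

	/* Killable sleeps are woken only by fatal signals. */
	return (state & TASK_INTERRUPTIBLE) || __fatal_signal_pending(p);
}
```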