Function Logic Report |
Source Code: include/linux/sched/task.h |
Create Date: 2022-07-27 06:41:15 |
Last Modify: 2020-03-12 14:18:49 | Copyright©Brick |
Function name: get_task_struct
Prototype: static inline struct task_struct *get_task_struct(struct task_struct *t)
Return type: struct task_struct *
Parameters:

Type | Name |
---|---|
struct task_struct * | t |

114 | Return: t |
Callers:

Name | Description |
---|---|
rdtgroup_move_task | |
_do_fork | fork a process |
mm_update_next_owner | A task is exiting. If it owned this mm, find a new owner for the mm. |
wait_task_zombie | Handle sys_wait4() work for one task in state EXIT_ZOMBIE. We hold read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have released the lock and the system call should return. |
wait_task_stopped | wait_task_stopped - Wait for %TASK_STOPPED or %TASK_TRACED. @wo: wait options; @ptrace: is the wait for ptrace; @p: task to wait for. Handle sys_wait4() work for %p in state %TASK_STOPPED or %TASK_TRACED. |
wait_task_continued | Handle do_wait work for one task in a live, non-stopped state. read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have released the lock and the system call should return. |
SYSCALL_DEFINE4 | |
find_get_task_by_vpid | |
get_pid_task | |
kthread_stop | stop a thread's execution |
__smpboot_create_thread | |
wake_q_add | wake_q_add() - queue a wakeup for 'later' waking |
do_sched_setscheduler | |
SYSCALL_DEFINE3 | sys_sched_setattr - same as above, but with extended sched_attr. @pid: the pid in question; @uattr: structure containing the extended parameters; @flags: for future extension. |
sched_setaffinity | |
task_non_contending | The utilization of a task cannot be immediately removed from the rq active utilization (running_bw) when the task blocks. |
start_dl_timer | If the entity depleted all its runtime, and if we want it to sleep while waiting for some new execution time to become available, we set the bandwidth replenishment timer to the replenishment instant and try to activate it. |
rwsem_mark_wake | Handle the lock release when processes blocked on it can now run; if we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must have been set. |
rt_mutex_adjust_prio_chain | Adjust the priority chain |
task_blocks_on_rt_mutex | Task blocks on lock. Prepare waiter and propagate pi chain. This must be called with lock->wait_lock held and interrupts disabled. |
remove_waiter | Remove a waiter from a lock and give up. Must be called with lock->wait_lock held and interrupts disabled. I must have just failed to try_to_take_rt_mutex(). |
rt_mutex_adjust_pi | Recheck the pi chain, in case we got a priority setting. Called from sched_setscheduler. |
setup_irq_thread | |
SYSCALL_DEFINE5 | |
__get_task_for_clock | |
mark_wake_futex | The hash bucket lock must be held when this is called. Afterwards, the futex_q must not be accessed. Callers must ensure to later call wake_up_q() for the actual wakeups to occur. |
cgroup_procs_write_start | |
css_task_iter_next | css_task_iter_next - return the next task for the iterator. @it: the task iterator being iterated. The "next" function for task iteration; @it should have been initialized via css_task_iter_start(). Returns NULL when the iteration reaches the end. |
cgroup_transfer_tasks | cgroup_transfer_tasks - move tasks from one cgroup to another. @to: cgroup to which the tasks will be moved; @from: cgroup in which the tasks currently reside. Locking rules between cgroup_post_fork() and the migration path guarantee that, if a task is forking while being migrated, the child ends up in a well-defined cgroup. |
rcu_lock_break | To avoid extending the RCU grace period for an unbounded amount of time, periodically exit the critical section and enter a new one. For preemptible RCU it is sufficient to call rcu_read_unlock in order to exit the grace period. |
probe_wakeup | |
alloc_perf_context | |
find_lively_task_by_vpid | |
perf_remove_from_owner | Remove user event from the owner task. |
perf_event_alloc | Allocate and initialize an event structure |
oom_evaluate_task | |
wake_oom_reaper | |
__oom_kill_process | |
oom_kill_memcg_member | Kill the provided task unless it is secured by setting oom_score_adj to OOM_SCORE_ADJ_MIN. |
out_of_memory | out_of_memory - kill the "best" process when we run out of memory. @oc: pointer to struct oom_control. If we run out of memory, we have the choice between either killing a random task (bad), letting the system crash (worse), or trying to be smart about which process to kill. |
swap_readpage | |
kernel_migrate_pages | |
kernel_move_pages | Move a list of pages in the address space of the currently executing process. |
add_to_kill | Schedule a process for later kill. Uses GFP_ATOMIC allocations to avoid potential recursions in the VM. |
report_access | defers execution because cmdline access can sleep |
yama_task_prctl | yama_task_prctl - check for Yama-specific prctl operations. @option: operation; @arg2..@arg5: arguments. Returns 0 on success, -ve on error; -ENOSYS is returned when Yama does not handle the given option. |