Function report

Source Code: kernel/sched/core.c
Create Date: 2022-07-28 09:35:44
Last Modify: 2022-05-22 13:40:38
Copyright © Brick
Name: wake_up_process - Wake up a specific process
@p: The process to be woken up.
Attempt to wake up the nominated process and move it to the set of runnable processes.
Return: 1 if the process was woken up, 0 if it was already running.
Proto: int wake_up_process(struct task_struct *p)
Type: int
Parameter:
| Type | Parameter |
|---|---|
| struct task_struct * | p |
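As a usage illustration (not taken from the annotated source), the sketch below shows the common pattern around wake_up_process(): kthread_create() leaves the new thread stopped, and wake_up_process() makes it runnable. demo_thread_fn() and demo_start() are hypothetical names.

```c
/*
 * Minimal sketch (illustrative, not from kernel/sched/core.c): a kthread is
 * created in the stopped state by kthread_create() and only starts running
 * once wake_up_process() is called on it.
 */
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/err.h>
#include <linux/printk.h>

static int demo_thread_fn(void *data)
{
	pr_info("demo kthread is now runnable and executing\n");
	return 0;	/* one-shot thread: exits on its own, no kthread_stop() needed */
}

static int demo_start(void)
{
	struct task_struct *tsk;

	tsk = kthread_create(demo_thread_fn, NULL, "demo_thread");
	if (IS_ERR(tsk))
		return PTR_ERR(tsk);

	/* Return value: 1 if the task was woken up, 0 if it was already running. */
	wake_up_process(tsk);
	return 0;
}
```

This is essentially what kthread_run() does: kthread_create() followed by wake_up_process().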
Caller:
| Name | Description |
|---|---|
| rdtgroup_pseudo_lock_create | rdtgroup_pseudo_lock_create - Create a pseudo-locked region. @rdtgrp: resource group to which pseudo-lock region belongs. Called when a resource group in the pseudo-locksetup mode receives a valid schemata that should be pseudo-locked |
| apm_init | Just start the APM thread. We do NOT want to do APM BIOS calls from anything but the APM thread, if for no other reason than the fact that we don't trust the APM BIOS. This way, most common APM BIOS problems that lead to protection errors |
| rcuwait_wake_up | |
| exit_notify | Send signals to all our closest relatives so that they know to properly mourn us. |
| wakeup_softirqd | We cannot loop indefinitely here to avoid userspace starvation, but we also don't want to introduce a worst case 1/HZ latency to the pending events, so let the scheduler balance the softirq load for us. |
| wake_up_worker | wake_up_worker - wake up an idle worker. @pool: worker pool to wake worker from. Wake up the first idle worker of @pool. CONTEXT: spin_lock_irq(pool->lock). |
| wq_worker_sleeping | wq_worker_sleeping - a worker is going to sleep. @task: task going to sleep. This function is called from schedule() when a busy worker is going to sleep. |
| create_worker | create_worker - create a new workqueue worker. @pool: pool the new worker will belong to. Create and start a new worker which is attached to @pool. CONTEXT: Might sleep. Does GFP_KERNEL allocations. Return: Pointer to the newly created worker. |
| destroy_worker | destroy_worker - destroy a workqueue worker. @worker: worker to be destroyed. Destroy @worker and adjust @pool stats accordingly. The worker should be idle. CONTEXT: spin_lock_irq(pool->lock). |
| send_mayday | |
| init_rescuer | Workqueues which may be used during memory reclaim should have a rescuer to guarantee forward progress. |
| free_pid | |
| __kthread_create_on_node | |
| kthread_park | kthread_park - park a thread created by kthread_create() |
| kthread_stop | stop a thread |
| __kthread_create_worker | |
| kthread_insert_work | Insert @work before @pos in @worker |
| wake_up_q | |
| swake_up_locked | The thing about the wake_up_state() return value; I think we can ignore it. If for some reason it would return 0, that means the previously waiting task is already running, so it will observe condition true (or has already). See the sketch after this table. |
| sugov_kthread_create | |
| __ww_mutex_die | Wait-Die; wake a younger waiter context (when locks held) such that it can die. Among waiters with context, only the first one can have other locks acquired already (ctx->acquired > 0), because __ww_mutex_add_waiter() and |
| __ww_mutex_wound | Wound-Wait; wound a younger @hold_ctx if it holds the lock. Wound the lock holder if there are waiters with older transactions than the lock holders. Even if multiple waiters may wound the lock holder, it's sufficient that only one does. |
| __up | |
| rt_mutex_adjust_prio_chain | Adjust the priority chain |
| __irq_wake_thread | |
| __setup_irq | register an interrupt |
| rcutorture_booster_init | |
| rcu_wake_cond | |
| rcu_spawn_gp_kthread | Spawn the kthreads that handle RCU's grace periods. |
| __thaw_task | |
| process_timeout | |
| hrtimer_wakeup | Sleep related functions: |
| cpu_timer_fire | The timer is locked, fire it and arrange for its reload. |
| cgroup_freeze_task | Freeze or unfreeze the task by setting or clearing the JOBCTL_TRAP_FREEZE jobctl bit. |
| audit_schedule_prune | |
| proc_dohung_task_timeout_secs | Process updating of timeout sysctl |
| ring_buffer_producer | |
| ring_buffer_producer_thread | |
| start_kthread | start_kthread - Kick off the hardware latency sampling/detector kthread. This starts the kernel thread that will sit and sample the CPU timestamp counter (TSC or similar) and look for potential hardware latencies. |
| __cpu_map_entry_alloc | |
| __cpu_map_flush | |
| dio_bio_end_io | The BIO completion handler simply queues the BIO up for the process-context handler. During I/O bi_private points at the dio. After I/O, bi_private is used to implement a singly-linked list of completed BIOs, at dio->bio_list. |
| io_sq_offload_start | |
| io_worker_release | |
| io_wqe_activate_free_worker | Check head of free list for an available worker. If one isn't available, caller must wake up the wq manager to create one. |
| io_wqe_wake_worker | We need a worker. If we find a free one, we're good. If not, and we're below the max number of workers, wake up the manager to create one. |
| create_io_worker | |
| io_wq_create | |
| io_wq_worker_wake | |
| coredump_finish | |
| blk_wake_io_task | |
| klist_release | |
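Several of the callers above sleep on a condition and are made runnable again by a partner calling wake_up_process(); the note on swake_up_locked also points out that a return value of 0 (task already running) is harmless. The sketch below, which is not code from the annotated kernel and uses hypothetical demo_* names, illustrates the standard publish-then-wake ordering those callers rely on.

```c
/*
 * Illustrative sketch of the usual sleep/wake ordering: the waker publishes
 * the condition before calling wake_up_process(), and the sleeper re-checks
 * the condition after set_current_state(), so a wakeup racing with the check
 * is not lost.  All demo_* names are hypothetical.
 */
#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/compiler.h>

static bool demo_work_pending;
static struct task_struct *demo_waiter;	/* set to the sleeping task elsewhere */

static int demo_waiter_fn(void *data)
{
	for (;;) {
		/* set_current_state() implies a barrier before the check below. */
		set_current_state(TASK_INTERRUPTIBLE);
		if (READ_ONCE(demo_work_pending)) {
			__set_current_state(TASK_RUNNING);
			break;		/* condition already true: skip the sleep */
		}
		schedule();		/* sleep until wake_up_process() */
	}
	/* ... consume the pending work ... */
	return 0;
}

static void demo_post_work(void)
{
	WRITE_ONCE(demo_work_pending, true);	/* publish the condition first */
	/* A return of 0 just means the waiter was already running and will see the flag. */
	wake_up_process(demo_waiter);
}
```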