Function Logic Report
Source Code: include/linux/spinlock.h
Create Date: 2022-07-27 06:39:18
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function name: spin_unlock_irq
Prototype: static __always_inline void spin_unlock_irq(spinlock_t *lock)
Return type: void
Parameters:
Type | Name |
---|---|
spinlock_t * | lock |
388 | raw_spin_unlock_irq(&lock->rlock) |
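spin_unlock_irq() releases the spinlock and then unconditionally re-enables local interrupts through raw_spin_unlock_irq(); it is the counterpart of spin_lock_irq() and is only correct when the caller knows interrupts were enabled before the lock was taken (otherwise the spin_lock_irqsave()/spin_unlock_irqrestore() pair must be used). The following is a minimal usage sketch; demo_lock, demo_counter and demo_update() are hypothetical names, not part of spinlock.h.

```c
#include <linux/spinlock.h>

/* Hypothetical data protected by a spinlock. */
static DEFINE_SPINLOCK(demo_lock);
static int demo_counter;

/*
 * Process-context update: interrupts are known to be enabled on entry,
 * so the spin_lock_irq()/spin_unlock_irq() pair is appropriate here.
 */
static void demo_update(void)
{
	spin_lock_irq(&demo_lock);	/* disable local IRQs, take the lock */
	demo_counter++;			/* critical section */
	spin_unlock_irq(&demo_lock);	/* drop the lock, re-enable local IRQs */
}
```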
Callers:
Name | Description |
---|---|
copy_sighand | Copy the signal handlers |
copy_process | Create a new process |
do_group_exit | Take down every thread in the group. This is called by fatal signals as well as by sys_exit_group (below). |
wait_task_zombie | Handle sys_wait4 work for one task in state EXIT_ZOMBIE. We hold read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have ... |
wait_task_stopped | wait_task_stopped - Wait for %TASK_STOPPED or %TASK_TRACED. @wo: wait options; @ptrace: is the wait for ptrace; @p: task to wait for. Handle sys_wait4() work for %p in state %TASK_STOPPED or %TASK_TRACED |
wait_task_continued | Handle do_wait work for one task in a live, non-stopped state. read_lock(&tasklist_lock) on entry. If we return zero, we still hold the lock and this task is uninteresting. If we return nonzero, we have ... |
ptrace_freeze_traced | Ensure that nothing can wake it up, even SIGKILL |
ptrace_unfreeze_traced | |
ptrace_peek_siginfo | |
ptrace_resume | |
ptrace_request | |
alloc_uid | |
uid_cache_init | |
calculate_sigpending | |
ptrace_stop | This must be called with current->sighand->siglock held. This should be the path for all ptrace stops. We always set current->last_siginfo while stopped here. That makes it a way to test a stopped process for ... |
ptrace_notify | |
do_signal_stop | do_signal_stop - handle group stop for SIGSTOP and other stop signals. @signr: signr causing group stop if initiating. If %JOBCTL_STOP_PENDING is not set yet, initiate group stop with @signr and participate in it |
do_freezer_trap | do_freezer_trap - handle the freezer jobctl trap. Puts the task into frozen state, if only the task is not about to quit. In this case it drops JOBCTL_TRAP_FREEZE. CONTEXT: Must be called with @current->sighand->siglock held ... |
get_signal | |
exit_signals | |
__set_current_blocked | |
do_sigpending | |
do_sigtimedwait | do_sigtimedwait - wait for queued signals specified in @which. @which: queued signals to wait for; @info: if non-null, the signal's siginfo is returned here; @ts: upper bound on process time suspension |
kernel_sigaction | Kernel signal handling |
do_sigaction | Signal handling |
call_usermodehelper_exec_async | This is the task which runs the usermode application |
wq_worker_sleeping | A worker that is preparing to go to sleep |
put_pwq_unlocked | put_pwq_unlocked - put_pwq() with surrounding pool lock/unlock. @pwq: pool_workqueue to put (can be %NULL). put_pwq() with locking. This function also allows %NULL @pwq. |
create_worker | create_worker - create a new workqueue worker. @pool: pool the new worker will belong to. Create and start a new worker which is attached to @pool. CONTEXT: Might sleep. Does GFP_KERNEL allocations. Return: Pointer to the newly created worker. |
idle_worker_timeout | |
pool_mayday_timeout | |
maybe_create_worker | maybe_create_worker - create a new worker if necessary. @pool: pool to create a new worker for. Create a new worker for @pool if necessary |
process_one_work | process_one_work - process single work. @worker: self; @work: work to process. Process @work |
worker_thread | |
rescuer_thread | rescuer_thread - the rescuer thread function. @__rescuer: self. Workqueue rescuer thread function |
flush_workqueue_prep_pwqs | flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing. @wq: workqueue being flushed; @flush_color: new flush color, < 0 for no-op; @work_color: new work color, < 0 for no-op. Prepare pwqs for workqueue flushing |
drain_workqueue | drain_workqueue - drain a workqueue. @wq: workqueue to drain. Wait until the workqueue becomes empty. While draining is in progress, only chain queueing is allowed. IOW, only currently pending or running ... |
start_flush_work | |
put_unbound_pool | put_unbound_pool - put a worker_pool. @pool: worker_pool to put. Put @pool |
wq_update_unbound_numa | wq_update_unbound_numa - update NUMA affinity of a wq for CPU hot[un]plug. @wq: the target workqueue; @cpu: the CPU coming up or going down; @online: whether @cpu is coming up or going down. This function is to be called from %CPU_DOWN_PREPARE, %CPU_ONLINE ... |
destroy_workqueue | destroy_workqueue - safely terminate a workqueue. @wq: target workqueue. Safely destroy a workqueue. All work currently pending will be done first. |
wq_worker_comm | Used to show worker information through /proc/PID/{comm,stat,status} |
alloc_pid | Allocate a process ID (PID) |
disable_pid_allocation | |
async_unregister_domain | async_unregister_domain - ensure no more anonymous waiters on this domain. @domain: idle domain to flush out of any async_synchronize_full instances. async_synchronize_{cookie|full}_domain() are not flushed since callers of these routines should know the ... |
get_ucounts | |
do_wait_intr_irq | |
do_wait_for_common | |
__wait_for_common | |
rcu_sync_enter | rcu_sync_enter() - Force readers onto slowpath. @rsp: Pointer to rcu_sync structure to use for synchronization. This function is used by updaters who need readers to make use of a slowpath during the update |
rcu_sync_exit | rcu_sync_exit() - Allow readers back onto fast path after grace period. @rsp: Pointer to rcu_sync structure to use for synchronization. This function is used by updaters who have completed, and can therefore now allow readers to make use of their fastpaths |
rcu_sync_dtor | rcu_sync_dtor() - Clean up an rcu_sync structure. @rsp: Pointer to rcu_sync structure to be cleaned up |
klp_send_signals | Sends a fake signal to all non-kthread tasks with TIF_PATCH_PENDING set. Kthreads with TIF_PATCH_PENDING set are woken up. |
__refrigerator | Refrigerator is place where frozen processes are stored :-). |
set_freezable | set_freezable - make %current freezable. Mark %current freezable and enter refrigerator if necessary. |
do_timer_create | Create a POSIX.1b interval timer. |
itimer_delete | Return timer owned by the process, used by exit_itimers |
update_rlimit_cpu | Called after updating RLIMIT_CPU to run cpu timer and update tsk->signal->posix_cputimers.bases[clock].nextevt expiration cache if necessary. Needs siglock protection since other code may update the expiration cache as well. |
do_cpu_nanosleep | |
get_cpu_itimer | |
do_getitimer | |
set_cpu_itimer | |
do_setitimer | |
fill_ac | Write an accounting entry for an exiting process. The acct_process() call is the workhorse of the process accounting system. The struct acct is built here and then written into the accounting file. This function should only be called from ... |
acct_collect | acct_collect - collect accounting information into pacct_struct. @exitcode: task exit code; @group_dead: not 0, if this thread is the last one in the process. |
cgroup_task_count | cgroup_task_count - count the number of tasks in a cgroup. @cgrp: the cgroup in question |
find_css_set | find_css_set - return a new css_set with one cgroup updated. @old_cset: the baseline css_set; @cgrp: the cgroup to be updated. Return a new css_set that's equivalent to @old_cset, but with @cgrp substituted into the appropriate hierarchy. |
cgroup_destroy_root | |
cgroup_rm_file | |
rebind_subsystems | |
cgroup_show_path | |
cgroup_setup_root | |
cgroup_do_get_tree | |
cgroup_path_ns | |
task_cgroup_path | task_cgroup_path - cgroup path of a task in the first cgroup hierarchy. @task: target task; @buf: the buffer to write the path into; @buflen: the length of the buffer. Determine @task's cgroup on the first (the one with the lowest non-zero ... |
cgroup_migrate_execute | cgroup_taskset_migrate - migrate a taskset. @mgctx: migration context. Migrate tasks in @mgctx as setup by migration preparation functions. This function fails iff one of the ->can_attach callbacks fails and ... |
cgroup_migrate_finish | cgroup_migrate_finish - cleanup after attach. @mgctx: migration context. Undo cgroup_migrate_add_src() and cgroup_migrate_prepare_dst(). See those functions for details. |
cgroup_migrate | cgroup_migrate - migrate a process or task to a cgroup. @leader: the leader of the process or the task to migrate; @threadgroup: whether @leader points to the whole process or a single task; @mgctx: migration context |
cgroup_attach_task | cgroup_attach_task - attach a task or a whole threadgroup to a cgroup. @dst_cgrp: the cgroup to attach to; @leader: the task or the leader of the threadgroup to be attached; @threadgroup: attach the whole threadgroup? |
cgroup_update_dfl_csses | cgroup_update_dfl_csses - update css assoc of a subtree in default hierarchy. @cgrp: root of the subtree to update csses for. @cgrp's control masks have changed and its subtree's css associations need to be updated accordingly |
cgroup_add_file | |
css_task_iter_start | css_task_iter_start - initiate task iteration. @css: the css to walk tasks of; @flags: CSS_TASK_ITER_* flags; @it: the task iterator to use. Initiate iteration through the tasks of @css |
css_task_iter_next | css_task_iter_next - return the next task for the iterator. @it: the task iterator being iterated. The "next" function for task iteration. @it should have been initialized via css_task_iter_start(). Returns NULL when the iteration reaches the end. |
css_task_iter_end | css_task_iter_end - finish task iteration. @it: the task iterator to finish. Finish task iteration started by css_task_iter_start(). |
cgroup_procs_write | |
cgroup_threads_write | |
css_release_work_fn | |
cgroup_create | The returned cgroup is fully initialized including its control mask, but it isn't associated with its kernfs_node and doesn't have the control mask applied. |
cgroup_destroy_locked | cgroup_destroy_locked - the first stage of cgroup destruction. @cgrp: cgroup to be destroyed. css's make use of percpu refcnts whose killing latency shouldn't be exposed to userland and are RCU protected |
proc_cgroup_show | proc_cgroup_show() - Print task's cgroup paths into seq_file, one line for each hierarchy. Used for /proc/ ... |
cgroup_post_fork | cgroup_post_fork - called on a new task after adding it to the task list. @child: the task in question. Adds the task to the list running through its css_set if necessary and call the subsystem fork() callbacks |
cgroup_exit | cgroup_exit - detach cgroup from exiting task. @tsk: pointer to task_struct of exiting process. Description: Detach cgroup from @tsk. |
cgroup_release | |
cgroup_rstat_flush_locked | see cgroup_rstat_flush() |
cgroup_rstat_flush | cgroup_rstat_flush - flush stats in @cgrp's subtree. @cgrp: target cgroup. Collect all per-cpu stats in @cgrp's subtree into the global counters and propagate them upwards |
cgroup_rstat_flush_release | cgroup_rstat_flush_release - release cgroup_rstat_flush_hold() |
copy_cgroup_ns | |
cgroup_attach_task_all | cgroup_attach_task_all - attach task 'tsk' to all cgroups of task 'from'. @from: attach to all cgroups of a given task; @tsk: the task to be attached |
cgroup_transfer_tasks | cgroup_transfer_tasks - move tasks from one cgroup to another. @to: cgroup to which the tasks will be moved; @from: cgroup in which the tasks currently reside. Locking rules between cgroup_post_fork() and the migration path guarantee that, if a task is ... |
cgroup1_release_agent | Notify userspace when a cgroup is released, by running the configured release agent with the name of the cgroup (path relative to the root of cgroup file system) as the argument |
cgroup_enter_frozen | Enter frozen/stopped state, if not yet there. Update cgroup's counters, and revisit the state of the cgroup, if necessary. |
cgroup_leave_frozen | Conditionally leave frozen/stopped state |
cgroup_do_freeze | Freeze or unfreeze all tasks in the given cgroup. |
update_parent_subparts_cpumask | update_parent_subparts_cpumask - update subparts_cpus mask of parent cpuset. @cpuset: The cpuset that requests change in partition root state; @cmd: Partition root state change command; @newmask: Optional new cpumask for partcmd_update; @tmp: Temporary addmask ... |
update_cpumasks_hier | update_cpumasks_hier - Update effective cpumasks and tasks in the subtree. @cs: the cpuset to consider; @tmp: temp variables for calculating effective_cpus & partition setup. When configured cpumask is changed, the effective cpumasks of this cpuset ... |
update_cpumask | update_cpumask - update the cpus_allowed mask of a cpuset and all tasks in it. @cs: the cpuset to consider; @trialcs: trial cpuset; @buf: buffer of cpu numbers written to this cpuset |
update_nodemasks_hier | update_nodemasks_hier - Update effective nodemasks and tasks in the subtree. @cs: the cpuset to consider; @new_mems: a temp variable for calculating new effective_mems. When configured nodemask is changed, the effective nodemasks of this cpuset ... |
update_nodemask | Handle user request to change the 'mems' memory placement of a cpuset |
update_flag | update_flag - read a 0 or a 1 in a file and update associated flag. Call with cpuset_mutex held. |
cpuset_common_seq_show | These ascii lists should be read in a single call, by using a user buffer large enough to hold the entire map |
cpuset_css_online | |
cpuset_bind | |
hotplug_update_tasks_legacy | |
hotplug_update_tasks | |
cpuset_hotplug_workfn | CPU / memory hotplug is handled asynchronously. |
current_css_set_read | |
current_css_set_cg_links_read | |
cgroup_css_links_read | |
zap_pid_ns_processes | |
seccomp_set_mode_strict | seccomp_set_mode_strict: internal function for setting strict seccomp. Once current->seccomp.mode is non-zero, it may not be changed. Returns 0 on success or -EINVAL on failure. |
taskstats_tgid_alloc | |
uprobe_deny_signal | If we are singlestepping, then ensure this thread is not connected to non-fatal signals until completion of singlestep. When xol insn itself triggers the signal, restart the original insn even if the task is ... |
handle_singlestep | Perform required fix-ups and disable singlestep. Allow pending signals to take effect. |
wait_on_page_bit_common | |
activate_page | |
isolate_lru_page | isolate_lru_page - tries to isolate a page from its LRU list. @page: page to isolate from its LRU list. Isolates a @page from an LRU list, clears PageLRU and adjusts the vmstat statistic corresponding to whatever LRU list the page was on. |
move_pages_to_lru | This moves pages from @list to corresponding LRU list. We move them the other way if the page is referenced by one or more processes, from rmap. If the pages are mostly unmapped, the processing is fast and it is ... |
shrink_inactive_list | shrink_inactive_list() is a helper for shrink_node(). It returns the number of reclaimed pages |
shrink_active_list | |
get_scan_count | Determine how aggressively the anon and file LRU lists should be scanned |
check_move_unevictable_pages | check_move_unevictable_pages - check pages for evictability and move to appropriate zone lru list. @pvec: pagevec with lru pages to check. Checks pages for evictability, if an evictable page is in the unevictable ... |
pagetypeinfo_showfree_print | |
pcpu_balance_workfn | Balance work is used to populate or destroy chunks asynchronously. We try to keep the number of populated free pages between PCPU_EMPTY_POP_PAGES_LOW and HIGH for atomic allocations and at most one empty chunk. |
list_lru_walk_one_irq | |
shadow_lru_isolate | |
munlock_vma_page | munlock_vma_page - munlock a vma page. @page: page to be unlocked, either a normal page or THP page head. Returns the size of the page as a page mask (0 for normal page, HPAGE_PMD_NR - 1 for THP head page) |
__munlock_pagevec | Munlock a batch of pages from the same zone. The work is split to two main phases ... |
drain_slots_cache_cpu | |
free_swap_slot | |
show_pools | |
reap_alien | Called from cache_reap() to regularly drain alien caches round robin. |
init_cache_node | |
setup_kmem_cache_node | |
drain_cpu_caches | |
drain_freelist | |
__do_tune_cpucache | Always called with the slab_mutex held |
drain_array | Drain an array if it contains any elements taking the node lock only if necessary. Note that the node listlock also protects the array_cache if drain_array() is used on the shared array. |
get_slabinfo | |
free_partial | Attempt to free all partial slabs on a node. This is called from __kmem_cache_shutdown(). We must take list_lock because sysfs file might still access partial list after the shutdowning. |
mem_cgroup_largest_soft_limit_node | |
unlock_page_lru | |
mem_cgroup_soft_limit_reclaim | |
page_idle_get_page | Idle page tracking only considers user memory pages, for other types of pages the idle flag is always unset and an attempt to set it is silently ignored |
percpu_stats_show | |
bio_dirty_fn | bio_check_pages_dirty() will check that all the BIO's pages are still dirty. If they are, then fine. If, however, some pages are clean then they must have been written out during the direct-IO read. So we take another ref on ... |
queue_max_sectors_store | |
blk_insert_flush | blk_insert_flush - insert a new PREFLUSH/FUA request. @rq: request to insert. To be called from __elv_add_request() for %ELEVATOR_INSERT_FLUSH insertions, or __blk_mq_run_hw_queue() to dispatch request. @rq is being submitted ... |
ioc_clear_queue | ioc_clear_queue - break any ioc association with the specified queue. @q: request_queue being cleared. Walk @q->icq_list and exit all io_cq's. |
ioc_create_icq | ioc_create_icq - create and link io_cq. @ioc: io_context of interest; @q: request_queue of interest; @gfp_mask: allocation mask. Make sure io_cq linking @ioc and @q exists |
blk_mq_requeue_work | |
blk_mq_mark_tag_wait | Mark us waiting for a tag. For shared tags, this involves hooking us into the tag wakeups. For non-shared tags, we can simply mark us needing a restart. For both cases, take care to check the condition again after marking us as waiting. |
blk_mq_sched_assign_ioc | |
disk_flush_events | disk_flush_events - schedule immediate event checking and flushing. @disk: disk to check and flush events for; @mask: events to flush. Schedule immediate event checking on @disk if not blocked. Events in @mask are scheduled to be cleared from the driver ... |
disk_clear_events | disk_clear_events - synchronously check, clear and return pending events. @disk: disk to fetch and clear events from; @mask: mask of events to be fetched and cleared. Disk events are synchronously checked and pending events in @mask ... |
disk_check_events | |
bsg_set_command_q | |
blkg_destroy_all | blkg_destroy_all - destroy all blkgs associated with a request_queue. @q: request_queue of interest. Destroy all blkgs associated with @q. |
blkcg_reset_stats | |
blkcg_print_blkgs | blkcg_print_blkgs - helper for printing per-blkg data. @sf: seq_file to print to; @blkcg: blkcg of interest; @prfill: fill function to print out a blkg; @pol: policy in question; @data: data to be passed to @prfill; @show_total: to print out sum of prfill return ... |
blkg_conf_prep | blkg_conf_prep - parse and prepare for per-blkg config update. @blkcg: target block cgroup; @pol: target policy; @input: input string; @ctx: blkg_conf_ctx to be filled. Parse per-blkg config update from @input and initialize @ctx with the result |
blkg_conf_finish | blkg_conf_finish - finish up per-blkg config update. @ctx: blkg_conf_ctx initialized by blkg_conf_prep(). Finish up after per-blkg config update. This function must be paired with blkg_conf_prep(). |
blkcg_print_stat | |
blkcg_destroy_blkgs | blkcg_destroy_blkgs - responsible for shooting down blkgs. @blkcg: blkcg of interest. blkgs should be removed while holding both q and blkcg locks |
blkcg_init_queue | blkcg_init_queue - initialize blkcg part of request queue. @q: request_queue to initialize. Called from blk_alloc_queue_node(). Responsible for initializing blkcg part of new request_queue @q. RETURNS: 0 on success, -errno on failure. |
blkcg_activate_policy | blkcg_activate_policy - activate a blkcg policy on a request_queue. @q: request_queue of interest; @pol: blkcg policy to activate. Activate @pol on @q ... |
blkcg_deactivate_policy | blkcg_deactivate_policy - deactivate a blkcg policy on a request_queue. @q: request_queue of interest; @pol: blkcg policy to deactivate. Deactivate @pol on @q. Follows the same synchronization rules as blkcg_activate_policy(). |
throtl_pending_timer_fn | |
blk_throtl_dispatch_work_fn | blk_throtl_dispatch_work_fn - work function for throtl_data->dispatch_work. @work: work item being executed. This function is queued for execution when bio's reach the bio_lists[] of throtl_data->service_queue. Those bio's are ready and issued by this ... |
blk_throtl_bio | |
blk_throtl_drain | blk_throtl_drain - drain throttled bios. @q: request_queue to drain throttled bios for. Dispatch all currently throttled bios on @q through ->make_request_fn(). |
iocg_activate | |
ioc_timer_fn | |
ioc_rqos_throttle | |
ioc_rqos_queue_depth_changed | |
ioc_rqos_exit | |
blk_iocost_init | |
ioc_weight_write | |
ioc_qos_write | |
ioc_cost_model_write | |
kyber_get_domain_token | |
bfq_bio_merge | |
bfq_end_wr | |
bfq_dispatch_request | |
bfq_insert_request | |
bfq_exit_queue | |
bfq_init_queue | |
queue_requeue_list_stop | |
blk_pre_runtime_suspend | blk_pre_runtime_suspend - Pre runtime suspend check. @q: the queue of the device. Description: This function will check if runtime suspend is allowed for the device by examining if there are any requests pending in the queue |
blk_post_runtime_suspend | blk_post_runtime_suspend - Post runtime suspend processing. @q: the queue of the device; @err: return value of the device's runtime_suspend function. Description: Update the queue's runtime status according to the return value of the device's runtime ... |
blk_pre_runtime_resume | blk_pre_runtime_resume - Pre runtime resume processing. @q: the queue of the device. Description: Update the queue's runtime status to RESUMING in preparation for the runtime resume of the device |
blk_post_runtime_resume | blk_post_runtime_resume - Post runtime resume processing. @q: the queue of the device; @err: return value of the device's runtime_resume function. Description: Update the queue's runtime status according to the return value of the ... |
blk_set_runtime_active | blk_set_runtime_active - Force runtime status of the queue to be active. @q: the queue of the device. If the device is left runtime suspended during system suspend the resume hook typically resumes the device and corrects runtime status accordingly |
selinux_bprm_committed_creds | Clean up the process immediately after the installation of new credentials due to exec |
de_thread | This function makes sure the current process has its own signal table, so that flush_signal_handlers can later reset the handlers without disturbing other processes. (Other processes might share the signal table via the CLONE_SIGHAND option to clone().) |
pipe_read | |
pipe_write | |
wait_sb_inodes | The @s_sync_lock is used to serialise concurrent sync operations to avoid lock contention problems with concurrent wait_sb_inodes() calls. Concurrent callers will block on the s_sync_lock rather than doing contending walks |
pin_remove | |
pin_kill | |
ep_poll | ep_poll - Retrieves ready events, and delivers them to the caller-supplied event buffer. @ep: Pointer to the eventpoll context; @events: Pointer to the userspace buffer where the ready events should be stored. |
signalfd_poll | |
signalfd_dequeue | |
do_signalfd4 | |
timerfd_read | |
timerfd_show | |
do_timerfd_settime | |
do_timerfd_gettime | |
eventfd_read | |
eventfd_write | |
eventfd_show_fdinfo | |
handle_userfault | The locking rules involved in returning VM_FAULT_RETRY depending on FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and FAULT_FLAG_KILLABLE are not straightforward |
userfaultfd_event_wait_completion | |
userfaultfd_release | |
userfaultfd_ctx_read | |
__wake_userfault | |
userfaultfd_show_fdinfo | |
free_ioctx_users | When this function runs, the kioctx has been removed from the "hash table" and ctx->users has dropped to 0, so we know no more kiocbs can be submitted - now it's safe to cancel any that need to be. |
user_refill_reqs_available | user_refill_reqs_available - Called to refill reqs_available when aio_get_req() runs out of space in the completion ring. |
aio_poll_complete_work | |
aio_poll | |
SYSCALL_DEFINE3 | sys_io_cancel: Attempts to cancel an iocb previously passed to io_submit. If the operation is successfully cancelled, the resulting event is copied into the memory pointed to by result without being placed into the completion queue and 0 is returned |
io_kill_timeouts | |
io_poll_remove_all | |
io_poll_remove | Find a running poll command that matches one specified in sqe->addr, and remove it if found. |
io_poll_complete_work | |
io_poll_add | |
io_timeout_remove | Remove or update an existing timeout command |
io_timeout | |
io_req_defer | |
io_grab_files | |
io_queue_linked_timeout | |
io_uring_cancel_files | |
__io_worker_unuse | Note: drops the wqe->lock if returning true! The caller must re-acquire the lock in that case. Some callers need to restart handling if this happens, so we can't just re-acquire the lock on behalf of the caller. |
io_worker_exit | |
io_worker_handle_work | |
io_wqe_worker | |
io_wq_worker_sleeping | Called when worker is going to sleep. If there are no workers currently running and we have work pending, wake up a free one or have the manager set one up. |
create_io_worker | |
io_wq_manager | Manager thread. Tasked with creating new workers, if we need them. |
zap_threads | |
coredump_finish | |
write_sequnlock_irq | |
read_sequnlock_excl_irq | |
kernel_dequeue_signal | |
kernel_signal_stop | |
ptr_ring_full_irq | |
ptr_ring_produce_irq | |
ptr_ring_empty_irq | |
ptr_ring_consume_irq | |
ptr_ring_consume_batched_irq | |
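Many of the callers listed above (for example do_wait_for_common, ptrace_stop, and the workqueue and io_uring worker threads) share one pattern: the lock is taken with spin_lock_irq(), and when the code has to sleep it drops the lock with spin_unlock_irq() around schedule(), then re-acquires it before re-checking the condition. The sketch below shows that pattern in generic form; state_lock, state_wq, ready, demo_wait_for_ready() and demo_set_ready() are hypothetical names and are not taken from any of the functions in the table.

```c
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

/* Hypothetical state guarded by a spinlock; a waiter sleeps until 'ready'. */
static DEFINE_SPINLOCK(state_lock);
static DECLARE_WAIT_QUEUE_HEAD(state_wq);
static bool ready;

static void demo_wait_for_ready(void)
{
	DEFINE_WAIT(wait);

	spin_lock_irq(&state_lock);
	while (!ready) {
		prepare_to_wait(&state_wq, &wait, TASK_UNINTERRUPTIBLE);
		/* Never sleep with a spinlock held and IRQs off: release
		 * the lock (re-enabling local interrupts) before schedule() ... */
		spin_unlock_irq(&state_lock);
		schedule();
		/* ... and re-take it before re-checking the condition. */
		spin_lock_irq(&state_lock);
	}
	spin_unlock_irq(&state_lock);
	finish_wait(&state_wq, &wait);
}

/* The corresponding waker, assumed to run in process context. */
static void demo_set_ready(void)
{
	spin_lock_irq(&state_lock);
	ready = true;
	spin_unlock_irq(&state_lock);
	wake_up(&state_wq);
}
```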