Function report

Linux Kernel v5.5.9

Brick Technologies Co., Ltd

Source Code: kernel/workqueue.c  Create Date: 2022-07-28 09:26:18
Last Modify: 2020-03-12 14:18:49  Copyright © Brick

Name:__queue_work

Proto:static void __queue_work(int cpu, struct workqueue_struct *wq, struct work_struct *work)

Type:void

Parameter:

Type                        Parameter Name
int                         cpu
struct workqueue_struct *   wq
struct work_struct *        work
1396  req_cpu = cpu
1404  lockdep_assert_irqs_disabled()
1406  debug_work_activate(work)
1409  If unlikely(wq->flags & __WQ_DRAINING) && WARN_ON_ONCE(!is_chained_work(wq)) Then Return — while a workqueue is draining, only work items queued from another work executing on the same workqueue are allowed
1412  rcu_read_lock() — mark the beginning of an RCU read-side critical section
1413  retry :
1414  If req_cpu == WORK_CPU_UNBOUND Then cpu = wq_select_unbound_cpu(raw_smp_processor_id()) — prefer the local CPU if allowed by wq_unbound_cpumask, otherwise round-robin among the allowed CPUs to avoid perturbing sensitive tasks
1418  If Not (wq->flags & WQ_UNBOUND) Then pwq = per_cpu_ptr(wq->cpu_pwqs, cpu)
1420  Else pwq = unbound_pwq_by_node(wq, cpu_to_node(cpu)) — return the unbound pool_workqueue for the given NUMA node; this must be called with wq_pool_mutex, wq->mutex or the RCU read lock held
1428  last_pool = get_work_pool(work) — return the worker_pool that @work was last associated with; pools are created and destroyed under wq_pool_mutex and allow read access under the RCU read lock
1429  If last_pool && last_pool != pwq->pool Then — @work may still be running on its previous pool; to guarantee non-reentrancy it must be queued there
1432  spin_lock( & last_pool->lock )
1434  worker = find_worker_executing_work(last_pool, work) — find the worker executing @work on @pool by searching @pool->busy_hash, which is keyed by the address of @work
1436  If worker && worker->current_pwq->wq == wq Then pwq = worker->current_pwq
1438  Else — not running there after all: release last_pool->lock and take pwq->pool->lock instead
1443  Else
1444  spin_lock( & pwq->pool->lock )
1455  If unlikely(!pwq->refcnt) Then — for unbound pools we may have raced with pwq release; if the refcnt is zero, repeat pwq selection
1458  cpu_relax()
1459  Go to retry
1462  WARN_ONCE(true, "workqueue: per-cpu pwq for %s on cpu%d has 0 refcnt", wq->name, cpu) — a per-cpu pwq should never be released while in use
1467  trace_workqueue_queue_work(req_cpu, pwq, work) — trace event fired when a work is queued, either immediately or once a delayed work is actually queued
1469  If WARN_ON(!list_empty( & work->entry)) Then Go to out
1472  pwq->nr_in_flight[pwq->work_color]++
1473  work_flags = work_color_to_flags(pwq->work_color)
1475  If likely(pwq->nr_active < pwq->max_active) Then
1476  trace_workqueue_activate_work(work) — trace event fired when a queued work is put on the active worklist, which happens immediately after queueing unless the @max_active limit is reached
1477  pwq->nr_active++
1478  worklist = &pwq->pool->worklist
1479  If list_empty(worklist) Then pwq->pool->watchdog_ts = jiffies
1481  Else
1482  work_flags |= WORK_STRUCT_DELAYED
1483  worklist = &pwq->delayed_works
1486  insert_work(pwq, work, worklist, work_flags) — insert @work, which belongs to @pwq, onto @worklist with the extra WORK_STRUCT_* flags in @work_flags set
1488  out :
1489  spin_unlock( & pwq->pool->lock )
1490  rcu_read_unlock() — mark the end of the RCU read-side critical section
Caller

Name                Describe
queue_work_on       Queue work on a specific CPU; the caller must ensure the CPU can't go away.
queue_work_node     Queue work on a "random" CPU within a given NUMA node.
delayed_work_timer_fn
__queue_delayed_work
rcu_work_rcufn
flush_delayed_work  Wait for a dwork to finish executing the last queueing; the delayed timer is cancelled and the pending work is queued for immediate execution.