Function report
Source Code: kernel/locking/qspinlock.c
Create Date: 2022-07-28 09:51:42
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Name: queued_spin_lock_slowpath - acquire the queued spinlock
@lock: Pointer to queued spinlock structure
@val: Current value of the queued spinlock 32-bit word (queue tail, pending bit, lock value; the word layout is sketched after the parameter table)
Uncontended fast path: (0,0,0) -> (0,0,1); anything else falls through to the slow path traced below.
Proto:void queued_spin_lock_slowpath(struct qspinlock *lock, unsigned int val)
Type:void
Parameter:
| Type | Name |
|---|---|
| struct qspinlock * | lock |
| unsigned int | val |
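For orientation before the line-by-line trace: @val packs a locked byte, a pending bit and an encoded tail (last queued CPU plus MCS node index) into one 32-bit word. The user-space sketch below illustrates that layout; the offsets are those of the NR_CPUS < 16K configuration in include/asm-generic/qspinlock_types.h and, like everything else in the sketch, should be read as an assumption rather than as part of this report's source.

```c
#include <stdint.h>
#include <stdio.h>

/*
 * Sketch of the qspinlock word layout (NR_CPUS < 16K case):
 *   bits  0- 7  locked byte
 *   bit      8  pending
 *   bits 16-17  tail index (per-CPU MCS node nesting level)
 *   bits 18-31  tail CPU + 1 (0 means "no tail")
 */
#define _Q_LOCKED_MASK    0x000000ffU
#define _Q_PENDING_MASK   0x0000ff00U
#define _Q_TAIL_IDX_MASK  0x00030000U
#define _Q_TAIL_CPU_MASK  0xfffc0000U
#define _Q_TAIL_MASK      (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)

int main(void)
{
	/* cpu 3, nesting idx 1, pending set, locked byte set */
	uint32_t val = (3U + 1) << 18 | 1U << 16 | 1U << 8 | 1U;

	printf("locked=%u pending=%d idx=%u cpu=%d\n",
	       val & _Q_LOCKED_MASK,
	       !!(val & _Q_PENDING_MASK),
	       (val & _Q_TAIL_IDX_MASK) >> 16,
	       (int)((val & _Q_TAIL_CPU_MASK) >> 18) - 1);
	return 0;
}
```

Compiled standalone this prints locked=1 pending=1 idx=1 cpu=3; an all-zero word therefore means unlocked with no waiters.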
| 320 | BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)) - break the compile if the condition is true |
| 322 | If pv_enabled() Then Go to pv_queue |
| 325 | If virt_spin_lock(lock) Then Return |
| 334 | If val == _Q_PENDING_VAL Then |
| 335 | cnt = _Q_PENDING_LOOPS |
| 336 | val = atomic_cond_read_relaxed(&lock->val, (VAL != _Q_PENDING_VAL) || !cnt--) |
| 343 | If val & ~_Q_LOCKED_MASK Then Go to queue |
| 363 | If Not (val & _Q_PENDING_MASK) Then clear_pending(lock) - clear the pending bit (*,1,* -> *,0,*) |
| 366 | Go to queue |
| 380 | If val & _Q_LOCKED_MASK Then atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK)) (the pending-bit hand-over is sketched after this table) |
| 390 | Return |
| 396 | queue : |
| 398 | pv_queue : |
| 399 | node = this_cpu_ptr(&qnodes[0].mcs) |
| 400 | idx = node->count++ (count is the nesting count, see qspinlock.c) |
| 401 | tail = encode_tail(smp_processor_id(), idx) - we must be able to distinguish between no-tail and the tail at 0:0, therefore the cpu number is incremented by one (see the encode_tail sketch after this table) |
| 412 | If unlikely(idx >= MAX_NODES) Then |
| 414 | While Not queued_spin_trylock(lock) - try to acquire the queued spinlock (returns 1 if lock acquired, 0 if failed) - loop |
| 415 | cpu_relax() |
| 416 | Go to release |
| 419 | node = grab_mcs_node(node, idx) |
| 433 | node->locked = 0 (locked: 1 if lock acquired) |
| 434 | node->next = NULL |
| 435 | pv_init_node(node) |
| 442 | If queued_spin_trylock(lock) - try to acquire the queued spinlock (returns 1 if lock acquired, 0 if failed) Then Go to release |
| 450 | smp_wmb() |
| 460 | next = NULL |
| 466 | If old & _Q_TAIL_MASK Then |
| 467 | prev = decode_tail(old) |
| 470 | WRITE_ONCE(prev->next, node) - link @node into the waitqueue |
| 472 | pv_wait_node(node, prev) |
| 507 | If val = pv_wait_head_or_lock(lock, node) Then Go to locked |
| 510 | val = atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_PENDING_MASK)) |
| 512 | locked : |
| 534 | If (val & _Q_TAIL_MASK) == tail Then |
| 535 | If atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL) Then Go to release |
| 552 | arch_mcs_spin_unlock_contended(&next->locked) - smp_store_release() provides a memory barrier to ensure all operations in the critical section have been completed before unlocking (the head-of-queue hand-over is sketched after this table) |
| 553 | pv_kick_node(lock, next) |
| 555 | release : |
| 559 | __this_cpu_dec(qnodes[0].mcs.count) |
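Rows 334-390 above are the pending-bit path: a lone contender avoids the MCS queue by becoming the single pending waiter, spinning until the owner drops the locked byte, and then converting pending into locked. A minimal user-space rendering of that shape with C11 atomics follows; struct myqspinlock, pending_path and the omission of the bounded _Q_PENDING_LOOPS wait of rows 334-336 are assumptions of this sketch, not kernel code.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define _Q_LOCKED_MASK   0x000000ffU
#define _Q_PENDING_MASK  0x0000ff00U
#define _Q_LOCKED_VAL    1U
#define _Q_PENDING_VAL   (1U << 8)

/* Hypothetical user-space stand-in for the kernel's struct qspinlock. */
struct myqspinlock { _Atomic uint32_t val; };

/*
 * Pending-bit path sketch (source lines 334-390): become the single
 * pending waiter, wait for the owner to drop the locked byte, then turn
 * pending into locked. Returns false when the caller must queue instead.
 */
static bool pending_path(struct myqspinlock *lock, uint32_t val)
{
	/* Someone is already pending or queued: go build/join the queue. */
	if (val & ~_Q_LOCKED_MASK)
		return false;

	/* 0,0,* -> 0,1,* : announce ourselves as the pending waiter. */
	val = atomic_fetch_or_explicit(&lock->val, _Q_PENDING_VAL,
				       memory_order_acquire);
	if (val & ~_Q_LOCKED_MASK) {
		/* We raced; undo pending only if we were the one to set it. */
		if (!(val & _Q_PENDING_MASK))
			atomic_fetch_and_explicit(&lock->val, ~_Q_PENDING_VAL,
						  memory_order_relaxed);
		return false;
	}

	/* 0,1,1 -> 0,1,0 : wait for the current owner to release the byte. */
	while (atomic_load_explicit(&lock->val, memory_order_acquire) &
	       _Q_LOCKED_MASK)
		;

	/* 0,1,0 -> 0,0,1 : clear pending and set locked in one step. */
	atomic_fetch_add_explicit(&lock->val, _Q_LOCKED_VAL - _Q_PENDING_VAL,
				  memory_order_relaxed);
	return true;
}

int main(void)
{
	struct myqspinlock lock = { 0 };

	/* Single-threaded demo: the lock is free, so the pending path wins. */
	if (pending_path(&lock, atomic_load(&lock.val)))
		printf("acquired, lock word = 0x%x\n",
		       (unsigned)atomic_load(&lock.val));
	return 0;
}
```

The final fetch_add subtracts the pending bit and adds the locked bit in one atomic step, mirroring the kernel's clear_pending_set_locked() for the generic layout, so any tail bits set by late arrivals are preserved.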
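Rows 399-401 and 467 deal with the MCS queue nodes: each CPU keeps a small per-context array of nodes, and the lock word's tail field records which node is currently last in the queue. Below is a compact sketch of the encode/decode step under the same layout assumption as the earlier sketch; qnodes, NR_CPUS_SKETCH and MAX_NODES here are illustrative stand-ins.

```c
#include <assert.h>
#include <stdint.h>

#define MAX_NODES           4      /* task, softirq, hardirq, NMI contexts */
#define NR_CPUS_SKETCH     64      /* assumption for this sketch           */
#define _Q_TAIL_IDX_OFFSET 16
#define _Q_TAIL_CPU_OFFSET 18

/* Hypothetical stand-in for the kernel's per-CPU array of MCS nodes. */
struct mcs_node {
	struct mcs_node *next;
	int locked;                /* 1 if lock acquired */
	int count;                 /* nesting count      */
};
static struct mcs_node qnodes[NR_CPUS_SKETCH][MAX_NODES];

/*
 * The tail stores (cpu + 1, idx) so that a tail of 0 means "no tail";
 * that is why the cpu number is incremented by one (rows 400-401, 467).
 */
static uint32_t encode_tail(int cpu, int idx)
{
	return ((uint32_t)(cpu + 1) << _Q_TAIL_CPU_OFFSET) |
	       ((uint32_t)idx << _Q_TAIL_IDX_OFFSET);
}

static struct mcs_node *decode_tail(uint32_t tail)
{
	int cpu = (int)(tail >> _Q_TAIL_CPU_OFFSET) - 1;
	int idx = (int)(tail >> _Q_TAIL_IDX_OFFSET) & (MAX_NODES - 1);

	return &qnodes[cpu][idx];
}

int main(void)
{
	assert(decode_tail(encode_tail(5, 2)) == &qnodes[5][2]);
	assert(encode_tail(0, 0) != 0);   /* cpu 0 / idx 0 is not "no tail" */
	return 0;
}
```

Storing cpu + 1 rather than cpu is what keeps CPU 0's first node distinguishable from "no tail at all", which is exactly the point made in row 401.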
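Rows 534-553 are the hand-over at the head of the queue: if nobody queued behind us, a single cmpxchg turns (n,0,0) into (0,0,1); otherwise we only set the locked byte and release the next waiter through its MCS node. The following is a hedged user-space sketch of that shape; claim_and_handoff and the plain spin on node->next are inventions of this sketch, where the kernel uses set_locked(), smp_cond_load_relaxed() and arch_mcs_spin_unlock_contended().

```c
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define _Q_LOCKED_VAL 1U
#define _Q_TAIL_MASK  0xffff0000U  /* tail idx + cpu bits, NR_CPUS < 16K layout */

struct myqspinlock { _Atomic uint32_t val; };
struct mcs_node { struct mcs_node *_Atomic next; _Atomic int locked; };

/*
 * Head-of-queue hand-over sketch (source lines 534-553): if we are still
 * the last queued waiter, one cmpxchg turns (n,0,0) into (0,0,1) and we
 * are done; otherwise we only set the locked byte and wake the next
 * waiter with a release store into its MCS node.
 */
static void claim_and_handoff(struct myqspinlock *lock, struct mcs_node *node,
			      uint32_t tail, uint32_t val)
{
	struct mcs_node *next;

	if ((val & _Q_TAIL_MASK) == tail) {
		uint32_t expected = val;

		if (atomic_compare_exchange_strong_explicit(&lock->val,
							    &expected,
							    _Q_LOCKED_VAL,
							    memory_order_relaxed,
							    memory_order_relaxed))
			return;                 /* nobody queued behind us */
	}

	/* Someone did queue behind us: take the lock, then unblock them. */
	atomic_fetch_or_explicit(&lock->val, _Q_LOCKED_VAL,
				 memory_order_relaxed);

	while (!(next = atomic_load_explicit(&node->next,
					     memory_order_relaxed)))
		;                               /* wait for the link (row 470) */

	atomic_store_explicit(&next->locked, 1, memory_order_release);
}

int main(void)
{
	struct myqspinlock lock;
	struct mcs_node me = { .next = NULL, .locked = 0 };
	struct mcs_node behind = { .next = NULL, .locked = 0 };
	uint32_t my_tail = 1U << 18;            /* cpu 0, idx 0 */

	/* Case 1: still the tail -> the single cmpxchg finishes the job. */
	atomic_init(&lock.val, my_tail);
	claim_and_handoff(&lock, &me, my_tail, my_tail);

	/* Case 2: another CPU queued behind us -> locked byte + hand-over. */
	atomic_init(&lock.val, 2U << 18);
	atomic_store(&me.next, &behind);
	claim_and_handoff(&lock, &me, my_tail, 2U << 18);
	printf("next waiter released: %d\n", (int)atomic_load(&behind.locked));
	return 0;
}
```

The release store on next->locked is the smp_store_release() that row 552 refers to.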
Caller:
| Name | Describe |
|---|---|
| queued_spin_lock | acquire a queued spinlock; @lock: Pointer to queued spinlock structure |
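For context on the single caller listed above, queued_spin_lock itself is only the fast path: one cmpxchg of the whole word from 0 to _Q_LOCKED_VAL, with any contention delegated to the slowpath documented in this report. The sketch below shows that shape plus a trylock in user-space C11 atomics; the my_* names and the trivial retry loop standing in for the real slowpath are assumptions of this sketch.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define _Q_LOCKED_VAL 1U

/* Hypothetical user-space stand-in for the kernel's struct qspinlock. */
struct myqspinlock { _Atomic uint32_t val; };

/* Placeholder for this sketch only: the real slowpath is the function
 * documented above; here we simply retry the uncontended cmpxchg. */
static void my_queued_spin_lock_slowpath(struct myqspinlock *lock, uint32_t val)
{
	(void)val;
	for (;;) {
		uint32_t zero = 0;

		if (atomic_compare_exchange_weak_explicit(&lock->val, &zero,
							  _Q_LOCKED_VAL,
							  memory_order_acquire,
							  memory_order_relaxed))
			return;
	}
}

/* Fast path: (0,0,0) -> (0,0,1) with one cmpxchg, else take the slowpath. */
static void my_queued_spin_lock(struct myqspinlock *lock)
{
	uint32_t val = 0;

	if (atomic_compare_exchange_strong_explicit(&lock->val, &val,
						    _Q_LOCKED_VAL,
						    memory_order_acquire,
						    memory_order_relaxed))
		return;

	my_queued_spin_lock_slowpath(lock, val);
}

/* Trylock: succeed only when the whole word (tail, pending, locked) is 0. */
static bool my_queued_spin_trylock(struct myqspinlock *lock)
{
	uint32_t val = 0;

	return atomic_compare_exchange_strong_explicit(&lock->val, &val,
						       _Q_LOCKED_VAL,
						       memory_order_acquire,
						       memory_order_relaxed);
}

int main(void)
{
	struct myqspinlock lock = { 0 };

	my_queued_spin_lock(&lock);
	printf("trylock while held: %d\n", my_queued_spin_trylock(&lock));
	atomic_store(&lock.val, 0);              /* unlock */
	printf("trylock when free: %d\n", my_queued_spin_trylock(&lock));
	return 0;
}
```

queued_spin_trylock only succeeds when the whole word is zero, which is why rows 414 and 442 can use it opportunistically from inside the slowpath.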