Function report

Source Code: include/asm-generic/bitops/instrumented-atomic.h
Create Date: 2022-07-28 05:34:15
Last Modify: 2020-03-12 14:18:49
Copyright © Brick
Name: test_and_set_bit - Set a bit and return its old value
@nr: Bit to set
@addr: Address to count from

This is an atomic fully-ordered operation (implied full memory barrier).
Proto: static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
Type: bool
Parameter:
| Type | Name | Description |
|---|---|---|
| long | nr | Bit to set |
| volatile unsigned long * | addr | Address to count from |
```c
static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
{
	/* Tell KASAN this is a write to the word that holds bit @nr. */
	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
	/* Defer to the architecture's atomic test-and-set implementation. */
	return arch_test_and_set_bit(nr, addr);
}
```
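Because the operation is fully ordered, a common pattern is to use the returned old value to claim a flag exactly once: only the caller that sees 0 performs the guarded work. A minimal sketch of that pattern follows; the flag word `state_flags`, the bit number `STATE_INITED`, and the `do_one_time_setup()` hook are hypothetical names, not part of this source file.

```c
#include <linux/bitops.h>

static unsigned long state_flags;	/* hypothetical flag word */
#define STATE_INITED	0		/* bit number, not a mask */

static void do_one_time_setup(void)	/* hypothetical setup hook */
{
}

static void maybe_init(void)
{
	/*
	 * test_and_set_bit() returns the old bit value; exactly one
	 * caller observes 0 and performs the setup, all others skip it.
	 */
	if (!test_and_set_bit(STATE_INITED, &state_flags))
		do_one_time_setup();
}
```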
Caller:

| Name | Description |
|---|---|
| kasan_bitops | |
| test_rhltable | |
| __lc_get | |
| irq_poll_sched | irq_poll_sched - Schedule a run of the iopoll handler. @iop: the parent iopoll structure. Add this irq_poll structure to the pending poll list and trigger the raise of the blk iopoll softirq. |
| irq_poll_disable | irq_poll_disable - Disable iopoll on this @iop. @iop: the parent iopoll structure. Disable io polling and wait for any pending callbacks to have completed. |
| was_reported | |
| alloc_intr_gate | |
| reserve_perfctr_nmi | |
| reserve_evntsel_nmi | |
| tasklet_kill | |
| try_to_grab_pending | try_to_grab_pending - steal work item from worklist and disable irq. @work: work item to steal; @is_dwork: @work is a delayed_work; @flags: place to store irq state. Try to grab the PENDING bit of @work; this function can handle @work in any stable state. |
| queue_work_on | queue_work_on - queue work on specific cpu. @cpu: CPU number to execute work on; @wq: workqueue to use; @work: work to queue. We queue the work to a specific CPU; the caller must ensure it can't go away. |
| queue_work_node | queue_work_node - queue work on a "random" cpu for a given NUMA node. @node: NUMA node that we are targeting the work for; @wq: workqueue to use; @work: work to queue. We queue the work to a "random" CPU within a given NUMA node. |
| queue_delayed_work_on | queue_delayed_work_on - queue work on specific CPU after delay. @cpu: CPU number to execute work on; @wq: workqueue to use; @dwork: work to queue; @delay: number of jiffies to wait before queueing. |
| queue_rcu_work | queue_rcu_work - queue work after an RCU grace period. @wq: workqueue to use; @rwork: work to queue. Return: %false if @rwork was already pending, %true otherwise. |
| __wait_on_bit_lock | |
| warn_no_thread | |
| __irq_wake_thread | |
| irq_matrix_assign | irq_matrix_assign - Assign a preallocated interrupt in the local CPU map. @m: matrix pointer; @bit: which bit to mark. This should only be used to mark preallocated vectors. |
| watchdog_overflow_callback | Callback function for perf event subsystem |
| xol_take_insn_slot | xol_take_insn_slot - search for a free slot. |
| wake_oom_reaper | |
| node_reclaim | |
| set_wb_congested | |
| vm_lock_mapping | |
| page_alloc_shuffle | Depending on the architecture, module parameter parsing may run before or after the cache detection. |
| kasan_save_enable_multi_shot | |
| report_enabled | |
| mm_get_huge_zero_page | |
| __khugepaged_enter | |
| drain_all_stock | Drains all per-CPU charge caches for the given root_memcg resp. subtree of the hierarchy under it. |
| z3fold_free | z3fold_free() - frees the allocation associated with the given handle. @pool: pool in which the allocation resided; @handle: handle associated with the allocation returned by z3fold_alloc(). In the case that the z3fold page in which the allocation resides… |
| z3fold_reclaim_page | z3fold_reclaim_page() - evicts allocations from a pool page and frees it. @pool: pool from which a page will attempt to be evicted; @retries: number of pages on the LRU list for which eviction will be attempted before failing. z3fold reclaim is different… |
| blk_queue_flag_test_and_set | blk_queue_flag_test_and_set - atomically test and set a queue flag. @flag: flag to be set; @q: request queue. Returns the previous value of @flag: 0 if the flag was not set and 1 if the flag was already set. |
| __blk_mq_tag_busy | If a previously inactive queue goes active, bump the active user count. We need to do this before trying to allocate a driver tag, so that even if we fail to get a tag the first time, the other shared-tag users can still reserve budget for it. |
| __blk_req_zone_write_lock | |
| key_revoke | key_revoke - Revoke a key |
| key_invalidate | key_invalidate - Invalidate a key. @key: the key to be invalidated. Mark a key as being invalidated and have it cleaned up immediately. The key is ignored by all searches and other operations from this point. |
| getoptions | Can have zero or more token= options. |
| ima_open_policy | ima_open_policy: sequentialize access to the policy file. |
| evm_set_key | evm_set_key() - set the EVM HMAC key from the kernel. @key: pointer to a buffer with the key data; @size: length of the key data. This function allows setting the EVM HMAC key from the kernel without using the "encrypted" key subsystem keys. |
| wb_start_writeback | |
| dquot_mark_dquot_dirty | Mark dquot dirty in an atomic manner, and return its old dirty flag state. |
| warning_issued | |
| test_and_set_bit_le | |
| cpumask_test_and_set_cpu | cpumask_test_and_set_cpu - atomically test and set a cpu in a cpumask. @cpu: cpu number (< nr_cpu_ids); @cpumask: the cpumask pointer. Returns 1 if @cpu was set in the old bitmap of @cpumask, else returns 0. A test_and_set_bit wrapper for cpumasks (see the sketch after this table). |
| test_and_set_ti_thread_flag | |
| __node_test_and_set | |
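Several callers in this table are thin typed wrappers that forward to test_and_set_bit() on an embedded bitmap; cpumask_test_and_set_cpu is the clearest example. Below is a hedged sketch of that wrapper pattern, modeled on include/linux/cpumask.h; the name `my_cpumask_test_and_set_cpu` is hypothetical, and the exact kernel definition may differ across versions (for instance, it may validate @cpu with cpumask_check() first).

```c
#include <linux/bitops.h>
#include <linux/cpumask.h>

/*
 * Sketch of the wrapper pattern: translate the typed object (a cpumask)
 * into the raw unsigned long bitmap that test_and_set_bit() expects.
 * cpumask_bits() expands to the mask's underlying bits array.
 */
static inline int my_cpumask_test_and_set_cpu(int cpu, struct cpumask *mask)
{
	return test_and_set_bit(cpu, cpumask_bits(mask));
}
```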