Function Logic Report
Source Code: include/asm-generic/bitops/instrumented-atomic.h
Create Date: 2022-07-27 06:38:10
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function Name: test_and_set_bit - Set a bit and return its old value. @nr: bit to set. @addr: address to count from. This is an atomic fully-ordered operation (implied full memory barrier).
Function Prototype: static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
Return Type: bool
Parameters:
| Type | Parameter | Description |
|---|---|---|
| long | nr | Bit to set |
| volatile unsigned long * | addr | Address to count from |
| Line | Code |
|---|---|
| 70 | kasan_check_write(addr + BIT_WORD(nr), sizeof(long)) |
| 71 | return arch_test_and_set_bit(nr, addr) |
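The two body lines above show that the instrumented variant is a thin wrapper: it first reports the write to KASAN, then delegates to the architecture's implementation. A reconstructed sketch of the function as it appears in this header, assuming the file's usual include of <linux/kasan-checks.h> and an arch-provided arch_test_and_set_bit():

```c
#include <linux/kasan-checks.h>	/* kasan_check_write() */

/**
 * test_and_set_bit - Set a bit and return its old value
 * @nr: Bit to set
 * @addr: Address to count from
 *
 * This is an atomic fully-ordered operation (implied full memory barrier).
 */
static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
{
	/* Tell KASAN the word that holds bit @nr is about to be written. */
	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
	/* Delegate the atomic read-modify-write to the architecture helper. */
	return arch_test_and_set_bit(nr, addr);
}
```

Callers of test_and_set_bit (line 71 of the report shows the value returned is the bit's previous state):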
| Name | Description |
|---|---|
| kasan_bitops | |
| test_rhltable | |
| __lc_get | |
| irq_poll_sched | irq_poll_sched - Schedule a run of the iopoll handler. @iop: the parent iopoll structure. Description: add this irq_poll structure to the pending poll list and trigger the raise of the blk iopoll softirq. |
| irq_poll_disable | irq_poll_disable - Disable iopoll on this @iop. @iop: the parent iopoll structure. Description: disable io polling and wait for any pending callbacks to have completed. |
| was_reported | |
| alloc_intr_gate | |
| reserve_perfctr_nmi | |
| reserve_evntsel_nmi | |
| tasklet_kill | |
| try_to_grab_pending | try_to_grab_pending - steal work item from worklist and disable irq. @work: work item to steal. @is_dwork: @work is a delayed_work. @flags: place to store irq state. Try to grab the PENDING bit of @work. This function can handle @work in any |
| queue_work_on | queue_work_on - queue work on specific cpu. @cpu: CPU number to execute work on. @wq: workqueue to use. @work: work to queue. We queue the work to a specific CPU; the caller must ensure it can't go away. |
| queue_work_node | queue_work_node - queue work on a "random" cpu for a given NUMA node. @node: NUMA node that we are targeting the work for. @wq: workqueue to use. @work: work to queue. We queue the work to a "random" CPU within a given NUMA node |
| queue_delayed_work_on | queue_delayed_work_on - queue work on specific CPU after delay. @cpu: CPU number to execute work on. @wq: workqueue to use. @dwork: work to queue. @delay: number of jiffies to wait before queueing |
| queue_rcu_work | queue_rcu_work - queue work after a RCU grace period. @wq: workqueue to use. @rwork: work to queue. Return: %false if @rwork was already pending, %true otherwise |
| __wait_on_bit_lock | |
| warn_no_thread | |
| __irq_wake_thread | |
| irq_matrix_assign | irq_matrix_assign - Assign a preallocated interrupt in the local CPU map. @m: matrix pointer. @bit: which bit to mark. This should only be used to mark preallocated vectors |
| watchdog_overflow_callback | Callback function for perf event subsystem |
| xol_take_insn_slot | Search for a free slot. |
| wake_oom_reaper | |
| node_reclaim | |
| set_wb_congested | |
| vm_lock_mapping | |
| page_alloc_shuffle | Depending on the architecture, module parameter parsing may run before, or after, the cache detection |
| kasan_save_enable_multi_shot | |
| report_enabled | |
| mm_get_huge_zero_page | |
| __khugepaged_enter | |
| drain_all_stock | Drains all per-CPU charge caches for the given root_memcg resp. the subtree of the hierarchy under it. |
| z3fold_free | z3fold_free() - frees the allocation associated with the given handle. @pool: pool in which the allocation resided. @handle: handle associated with the allocation returned by z3fold_alloc(). In the case that the z3fold page in which the allocation resides |
| z3fold_reclaim_page | z3fold_reclaim_page() - evicts allocations from a pool page and frees it. @pool: pool from which a page will attempt to be evicted. @retries: number of pages on the LRU list for which eviction will be attempted before failing. z3fold reclaim is different |
| blk_queue_flag_test_and_set | blk_queue_flag_test_and_set - atomically test and set a queue flag. @flag: flag to be set. @q: request queue. Returns the previous value of @flag - 0 if the flag was not set and 1 if the flag was already set. |
| __blk_mq_tag_busy | If a previously inactive queue goes active, bump the active user count. We need to do this before trying to allocate a driver tag, so that even if the first attempt to get a tag fails, the other shared-tag users can reserve budget for it. |
| __blk_req_zone_write_lock | |
| key_revoke | key_revoke - Revoke a key |
| key_invalidate | key_invalidate - Invalidate a key. @key: the key to be invalidated. Mark a key as being invalidated and have it cleaned up immediately. The key is ignored by all searches and other operations from this point. |
| getoptions | Can have zero or more token= options |
| ima_open_policy | ima_open_policy: sequentialize access to the policy file |
| evm_set_key | evm_set_key() - set EVM HMAC key from the kernel. @key: pointer to a buffer with the key data. @size: length of the key data. This function allows setting the EVM HMAC key from the kernel without using the "encrypted" key subsystem keys |
| wb_start_writeback | |
| dquot_mark_dquot_dirty | Mark dquot dirty in atomic manner, and return its old dirty flag state |
| warning_issued | |
| test_and_set_bit_le | |
| cpumask_test_and_set_cpu | Atomically test and set a CPU in a cpumask |
| test_and_set_ti_thread_flag | |
| __node_test_and_set | |
| TestSetPageDirty | |
| TestSetPagePinned | |
| TestSetPagePrivate2 | |
| TestSetPageWriteback | |
| TestSetPageMlocked | |
| TestSetPageHWPoison | |
| TestSetPageDoubleMap | |
| wait_on_bit_lock | wait_on_bit_lock - wait for a bit to be cleared, when wanting to set it. @word: the word being waited on, a kernel virtual address. @bit: the bit of the word being waited on. @mode: the task state to sleep in |
| wait_on_bit_lock_io | wait_on_bit_lock_io - wait for a bit to be cleared, when wanting to set it. @word: the word being waited on, a kernel virtual address. @bit: the bit of the word being waited on. @mode: the task state to sleep in. Use the standard hashed waitqueue table to |
| wait_on_bit_lock_action | wait_on_bit_lock_action - wait for a bit to be cleared, when wanting to set it. @word: the word being waited on, a kernel virtual address. @bit: the bit of the word being waited on. @action: the function used to sleep, which may take special actions. @mode: |
| tasklet_schedule | |
| tasklet_hi_schedule | |
| netif_dormant_on | netif_dormant_on - mark device as dormant. @dev: network device. Mark device as dormant (as per RFC2863). The dormant state indicates that the relevant interface is not actually in a condition to pass packets (i.e., it is not 'up') but is |
| lc_try_lock_for_transaction | lc_try_lock_for_transaction - can be used to stop lc_get() from changing the tracked set. @lc: the lru cache to operate on. Allows (expects) the set to be "dirty". Note that the reference counts and order on the active and lru lists may still change |
| xprt_test_and_set_connected | |
| xprt_test_and_set_connecting | |
| xprt_set_bound | |
| xprt_test_and_set_binding | |
| test_set_buffer_dirty | |
| test_set_buffer_req | |
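Most of the callers above rely on the same property: the returned old value tells the caller whether it was the first to set the flag. A minimal usage sketch of that pattern; my_flags, MY_FLAG_BUSY, claim_once() and do_first_time_setup() are hypothetical names for illustration, not identifiers from the report:

```c
#include <linux/bitops.h>

/* Hypothetical flag word and bit number, used only for illustration. */
#define MY_FLAG_BUSY	0
static unsigned long my_flags;

static void do_first_time_setup(void)
{
	/* Placeholder for work that must happen exactly once. */
}

static void claim_once(void)
{
	/*
	 * test_and_set_bit() atomically sets the bit and returns its old
	 * value, so only the first caller observes 0 and performs the
	 * one-time work; concurrent or later callers see 1 and skip it.
	 */
	if (test_and_set_bit(MY_FLAG_BUSY, &my_flags))
		return;	/* someone else already claimed the flag */

	do_first_time_setup();
}
```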