Function Logic Report
Source Code: include/asm-generic/atomic-long.h
Create Date: 2022-07-27 06:38:50
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function name: atomic_long_read
Prototype: static inline long atomic_long_read(const atomic_long_t *v)
Return type: long
Parameters:

Type | Name
---|---
const atomic_long_t * | v

Body (source line 522): return atomic_read(v);
Name | Description
---|---
show_mem | |
percpu_ref_switch_to_atomic_rcu | |
gen_pool_alloc_algo_owner | gen_pool_alloc_algo_owner - allocate special memory from the pool. @pool: pool to allocate from; @size: number of bytes to allocate from the pool; @algo: algorithm passed from caller; @data: data passed to algorithm; @owner: optionally retrieve the chunk owner |
gen_pool_avail | Get the available space in the memory pool |
check_mm | |
get_work_pwq | |
get_work_pool | get_work_pool - return the worker_pool a given work was associated with. @work: the work item of interest. Pools are created and destroyed under wq_pool_mutex, and allow read access under RCU read lock. As such, this function should be… |
get_work_pool_id | get_work_pool_id - return the worker pool ID a given work is associated with. @work: the work item of interest. Return: the worker_pool ID @work was last associated with, or %WORK_OFFQ_POOL_NONE if none. |
work_is_canceling | |
calc_global_load | calc_load - update the avenrun load estimates 10 ticks after the CPUs have updated calc_load_tasks. Called from the global timer code. |
__mutex_owner | Internal helper function; C doesn't allow us to hide it :/ DO NOT USE (outside of mutex code). |
__mutex_trylock_or_owner | Trylock variant that returns the owning task on failure. |
__mutex_handoff | Give up ownership to a specific task; when @task = NULL this is equivalent to a regular unlock. Sets PICKUP on a handoff, clears HANDOFF, preserves WAITERS. Provides RELEASE semantics like a regular unlock… |
ww_mutex_set_context_fastpath | After acquiring lock with fastpath, where we do not hold wait_lock, set ctx and wake up any waiters so they can recheck. |
__mutex_unlock_slowpath | |
rwsem_test_oflags | Test the flags in the owner field. |
__rwsem_set_reader_owned | The task_struct pointer of the last owning reader will be left in the owner field. Note that the owner value just indicates the task has owned the rwsem previously; it may not be the real owner or one of the real owners… |
rwsem_set_nonspinnable | Set the RWSEM_NONSPINNABLE bits if the RWSEM_READER_OWNED flag remains set. Otherwise, the operation will be aborted. |
rwsem_owner | Return just the real task structure pointer of the owner |
rwsem_owner_flags | Return the real task structure pointer of the owner and the embedded flags in the owner. pflags must be non-NULL. |
rwsem_mark_wake | Handle the lock release when processes blocked on it can now run. If we come here from up_xxxx(), then the RWSEM_FLAG_WAITERS bit must have been set |
rwsem_try_write_lock | This function must be called with sem->wait_lock held to prevent race conditions between checking the rwsem wait list and setting… If wstate is WRITER_HANDOFF, it will make sure that either the handoff… |
rwsem_down_read_slowpath | Wait for the read lock to be granted |
rwsem_down_write_slowpath | Wait until we successfully acquire the write lock |
rcu_torture_stats_print | Print torture statistics |
free_event | Used to free events which have a known refcount of 1, such as in error paths where the event isn't exposed yet and inherited events. |
perf_mmap | |
do_shrink_slab | |
node_page_state | Determine the per node value of a stat item. |
workingset_eviction | workingset_eviction - note the eviction of a page from memory. @target_memcg: the cgroup that is causing the reclaim; @page: the page being evicted. Returns a shadow entry to be stored in @page->mapping->i_pages in place… |
workingset_refault | workingset_refault - evaluate the refault of a previously evicted page. @page: the freshly allocated replacement page; @shadow: shadow entry of the evicted page. Calculates and evaluates the refault distance of the previously evicted page in the context of… |
vmalloc_nr_pages | |
__purge_vmap_area_lazy | Purges all lazily-freed vmap areas. |
get_swap_pages | |
si_swapinfo | |
hugetlb_report_usage | |
clear_hwpoisoned_pages | |
propagate_protected_usage | |
page_counter_set_max | page_counter_set_max - set the maximum number of pages allowed. @counter: counter; @nr_pages: limit to set. Returns 0 on success, -EBUSY if the current number of pages on the counter already exceeds the specified limit. |
page_counter_set_min | page_counter_set_min - set the amount of protected memory. @counter: counter; @nr_pages: value to set. The caller must serialize invocations on the same counter. |
page_counter_set_low | page_counter_set_low - set the amount of protected memory. @counter: counter; @nr_pages: value to set. The caller must serialize invocations on the same counter. |
memcg_events | |
mem_cgroup_oom_control_read | |
__memory_events_show | |
mem_cgroup_protected | mem_cgroup_protected - check if memory consumption is in the normal range. @root: the top ancestor of the sub-tree being checked; @memcg: the memory cgroup to check. WARNING: This function is not stateless! It can only be used as part… |
swap_events_show | |
zs_get_total_pages | |
get_io_context | get_io_context - increment reference count to io_context. @ioc: io_context to get. Increment reference count to @ioc. |
put_io_context | put_io_context - put a reference of io_context. @ioc: io_context to put. Decrement reference count of @ioc and release it if the count reaches zero. |
ima_show_htable_value | |
__destroy_inode | |
sb_prepare_remount_readonly | |
__ns_get_path | |
fsnotify_unmount_inodes | fsnotify_unmount_inodes - an sb is unmounting; handle any watched inodes. @sb: superblock being unmounted. Called during unmount with no locks held, so needs to be safe against concurrent modifiers. We temporarily drop sb->s_inode_list_lock and CAN block. |
ep_insert | Must be called with "mtx" held. |
io_account_mem | |
rwsem_is_locked | In all implementations count != 0 means locked |
zone_managed_pages | |
percpu_ref_is_zero | Check whether the percpu refcount has dropped to zero |
totalram_pages | |
get_io_context_active | Get an active reference to the I/O context |
global_numa_state | |
zone_numa_state_snapshot | |
global_zone_page_state | |
global_node_page_state | |
zone_page_state | |
zone_page_state_snapshot | More accurate version that also considers the currently pending deltas. For that we need to loop over all cpus to find the current deltas. There is no synchronization, so the result cannot be exactly accurate either. |
get_mm_counter | per-process(per-mm_struct) statistics. |
mm_pgtables_bytes | |
frag_mem_limit | Memory Tracking Functions. |
page_counter_read | |
memcg_page_state | idx can be of type enum memcg_stat_item or node_stat_item. Keep in sync with memcg_exact_page_state(). |
lruvec_page_state | |
sk_memory_allocated | |
proto_memory_allocated | |
bdi_has_dirty_io | |
nfs_have_writebacks | |