Function Logic Report
Source Code: include/asm-generic/atomic-instrumented.h
Create Date: 2022-07-27 06:38:46
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function name: atomic_read
Function prototype: static inline int atomic_read(const atomic_t *v)
Return type: int
Parameters:
Type | Name |
---|---|
const atomic_t * | v |
Line | Code |
---|---|
26 | kasan_check_read(v, sizeof(*v)) |
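The report lists only the KASAN read check at line 26. For context, below is a minimal sketch of what this instrumented wrapper usually looks like, assuming the common pattern of asm-generic/atomic-instrumented.h (tell KASAN about the access, then delegate to the architecture implementation); it is a sketch based on the prototype and line above, not a verbatim copy of any particular kernel release.

```c
/*
 * Sketch of the instrumented wrapper, assuming the usual structure of
 * asm-generic/atomic-instrumented.h: report the read to KASAN, then
 * hand off to the architecture's real atomic load.
 */
static inline int atomic_read(const atomic_t *v)
{
	kasan_check_read(v, sizeof(*v));   /* line 26 in the table above */
	return arch_atomic_read(v);        /* arch-specific atomic load */
}
```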
Name | Description |
---|---|
current_is_single_threaded | Returns true if the task does not share ->mm with another thread/process. |
rhashtable_shrink | rhashtable_shrink - Shrink hash table while allowing concurrent lookups*@ht: the hash table to shrink* This function shrinks the hash table to fit, i
refcount_dec_not_one | refcount_dec_not_one - decrement a refcount if it is not 1*@r: the refcount* No atomic_t counterpart, it decrements unless the value is 1, in which case* it will return false
test_bucket_stats | |
threadfunc | |
test_rht_init | |
fail_dump | |
should_fail | This code is stolen from failmalloc-1.0* http://www.nongnu.org/failmalloc/ |
sbq_wake_ptr | |
sbitmap_queue_wake_all | |
sbitmap_queue_show | |
arch_show_interrupts | /proc/interrupts printing for arch specific interrupts |
arch_irq_stat | |
tboot_wait_for_aps | |
tboot_dying_cpu | |
mce_default_notifier | |
mce_timed_out | Check if a timeout waiting for other CPUs happened. |
mce_start | Start of Monarch synchronization. This waits until all CPUs have* entered the exception handler and then determines if any of them* saw a fatal event that requires panic. Then it executes them* in the entry order.* TBD double check parallel CPU hotunplug |
mce_end | Synchronize between CPUs after main scanning loop.* This invokes the bulk of the Monarch processing. |
mce_adjust_timer_default | |
__wait_for_cpus | |
free_all_child_rdtgrp | |
rmdir_all_sub | Forcibly remove all of subdirectories under root. |
smp_stop_nmi_callback | |
reserve_eilvt_offset | |
kgdb_nmi_handler | |
__kgdb_notify | |
__mmput | |
mm_release | Please note the differences between mmput and mm_release |
copy_process | Create a new process
unshare_fd | Unshare file descriptor table if it is being shared |
mm_update_next_owner | A task is exiting. If it owned this mm, find a new owner for the mm. |
tasklet_action_common | |
__sigqueue_alloc | allocate a new signal queue record* - this may be called without locks if and only if t == current, otherwise an* appropriate lock must be held to stop the target task from exiting |
__usermodehelper_disable | __usermodehelper_disable - Prevent new helpers from being started.*@depth: New value to assign to usermodehelper_disabled.* Set usermodehelper_disabled to @depth and wait for running helpers to exit. |
__need_more_worker | Policy functions. These define the policies on how the global worker* pools are managed. Unless noted otherwise, these functions assume that* they're being called with pool->lock held. |
keep_working | Do I need to keep working? Called from currently running workers. |
worker_enter_idle | worker_enter_idle - enter idle state*@worker: worker which is entering idle state*@worker is entering idle state. Update stats and idle timer if* necessary.* LOCKING:* spin_lock_irq(pool->lock). |
flush_workqueue_prep_pwqs | flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing*@wq: workqueue being flushed*@flush_color: new flush color, < 0 for no-op*@work_color: new work color, < 0 for no-op* Prepare pwqs for workqueue flushing
put_cred_rcu | The RCU callback to actually dispose of a set of credentials |
__put_cred | __put_cred - Destroy a set of credentials*@cred: The record to release* Destroy a set of credentials on which no references remain. |
exit_creds | Clean up a task's credentials when it exits |
copy_creds | Copy credentials
commit_creds | commit_creds - Install new credentials upon the current task*@new: The credentials to be assigned* Install a new set of credentials to the current task, using RCU to replace* the old set. Both the objective and the subjective credentials pointers are
abort_creds | abort_creds - Discard a set of credentials and unlock the current task*@new: The credentials that were going to be applied* Discard a set of credentials that were under construction and unlock the* current task.
override_creds | override_creds - Override the current process's subjective credentials*@new: The credentials to be assigned* Install a set of temporary override subjective credentials on the current* process, returning the old set for later reversion.
revert_creds | revert_creds - Revert a temporary subjective credentials override*@old: The credentials to be restored* Revert a temporary set of override subjective credentials to an old set,* discarding the override set.
async_schedule_node_domain | async_schedule_node_domain - NUMA specific version of async_schedule_domain*@func: function to execute asynchronously*@data: data pointer to pass to the function*@node: NUMA node that we want to schedule this on or close to*@domain: the domain |
cpu_report_state | Called to poll specified CPU's state, for example, when waiting for* a CPU to come online. |
cpu_check_up_prepare | If CPU has died properly, set its state to CPU_UP_PREPARE and* return success |
atomic_inc_below | |
__request_module | __request_module - try to load a kernel module*@wait: wait (or not) for the operation to complete*@fmt: printf style format string for the name of the module*@...: arguments as specified in the format string |
nr_iowait_cpu | Consumers of these two interfaces, like for example the cpuidle menu* governor, are using nonsensical data. Preferring shallow idle state selection* for a CPU that has IO-wait which might not even end up running the task when* it does become runnable. |
account_idle_time | Account for idle time.*@cputime: the CPU time spent in idle wait |
cpupri_find | cpupri_find - find the best (lowest-pri) CPU in the system*@cp: The cpupri context*@p: The task*@lowest_mask: A mask to fill in with selected CPUs (or NULL)* Note: This function returns the recommended CPUs as calculated during the* current invocation
__free_domain_allocs | |
claim_allocations | NULL the sd_data elements we've used to build the sched_domain and* sched_group structure so that the subsequent __free_domain_allocs()* will not free the data we're using. |
ipi_sync_rq_state | |
membarrier_private_expedited | |
sync_runqueues_membarrier_state | |
membarrier_register_global_expedited | |
membarrier_register_private_expedited | |
osq_wait_next | Get a stable @node->next pointer, either for unlock() or unqueue() purposes.* Can return NULL in case we were the last queued and we updated @lock instead. |
queued_write_lock_slowpath | queued_write_lock_slowpath - acquire write lock of a queue rwlock*@lock : Pointer to queue rwlock structure |
lock_torture_cleanup | Forward reference. |
hib_wait_io | |
crc32_threadfn | CRC32 update function that runs in its own thread. |
lzo_compress_threadfn | Compression function that runs in its own thread. |
save_image_lzo | save_image_lzo - Save the suspend image data compressed with LZO.*@handle: Swap map handle to use for saving the image.*@snapshot: Image to read data from.*@nr_to_write: Number of pages to save. |
lzo_decompress_threadfn | Decompression function that runs in its own thread.
load_image_lzo | load_image_lzo - Load compressed image data and decompress them with LZO.*@handle: Swap map handle to use for loading data.*@snapshot: Image to copy uncompressed data into.*@nr_to_read: Number of pages to load. |
printk_safe_log_store | Add a message to per-CPU context-dependent buffer |
__printk_safe_flush | Flush data from the associated per-CPU buffer. The function* can be called either via IRQ work or independently. |
synchronize_hardirq | synchronize_hardirq - wait for pending hard IRQ handlers (on other CPUs)*@irq: interrupt number to wait for* This function waits for any pending hard IRQ handlers for this* interrupt to complete before returning |
synchronize_irq | synchronize_irq - wait for pending IRQ handlers (on other CPUs)*@irq: interrupt number to wait for* This function waits for any pending IRQ handlers for this interrupt* to complete before returning. If you use this function while |
note_interrupt | |
rcu_gp_is_expedited | Should normal grace-period primitives be expedited? Intended for* use within RCU. Note that this function takes the rcu_expedited* sysfs/boot variable and rcu_scheduler_active into account as well* as the rcu_expedite_gp() nesting |
rcu_torture_stats_print | Print torture statistics |
rcu_torture_barrier | kthread function to drive and coordinate RCU barrier testing. |
rcu_torture_cleanup | |
rcu_perf_wait_shutdown | If performance tests complete, wait for shutdown to commence. |
rcu_perf_writer | RCU perf writer kthread. Repeatedly does a grace period. |
rcu_perf_shutdown | RCU perf shutdown kthread. Just waits to be awakened, then shuts* down system. |
rcu_perf_init | |
rcu_dynticks_eqs_online | Reset the current CPU's ->dynticks counter to indicate that the* newly onlined CPU is no longer in an extended quiescent state |
rcu_dynticks_curr_cpu_in_eqs | Is the current CPU in an extended quiescent state?* No ordering, as we are sampling CPU-local information. |
rcu_eqs_special_set | Set the special (bottom) bit of the specified CPU so that it* will take special action (such as flushing its TLB) on the* next exit from an extended quiescent state. Returns true if* the bit was successfully set, or false if the CPU was not in |
rcu_eqs_enter | Enter an RCU extended quiescent state, which can be either the* idle loop or adaptive-tickless usermode execution.* We crowbar the ->dynticks_nmi_nesting field to zero to allow for* the possibility of usermode upcalls having messed up our count |
rcu_nmi_exit_common | If we are returning from the outermost NMI handler that interrupted an* RCU-idle period, update rdp->dynticks and rdp->dynticks_nmi_nesting* to let the RCU grace-period handling know that the CPU is back to* being RCU-idle |
rcu_eqs_exit | Exit an RCU extended quiescent state, which can be either the* idle loop or adaptive-tickless usermode execution.* We crowbar the ->dynticks_nmi_nesting field to DYNTICK_IRQ_NONIDLE to* allow for the possibility of usermode upcalls messing up our count of |
rcu_nmi_enter_common | rcu_nmi_enter_common - inform RCU of entry to NMI context*@irq: Is this call from rcu_irq_enter?* If the CPU was idle from RCU's viewpoint, update rdp->dynticks and* rdp->dynticks_nmi_nesting to let the RCU grace-period handling know* that the CPU is active
rcu_barrier_trace | Helper function for rcu_barrier() tracing. If tracing is disabled,* the compiler is expected to optimize this away. |
cgroup_destroy_root | |
cgroup_setup_root | |
css_task_iter_advance | |
proc_cgroupstats_show | Display information about each subsystem and each hierarchy |
cgroup_subsys_states_read | |
audit_log_lost | audit_log_lost - conditionally log lost audit message event*@message: the message stating reason for lost audit message* Emit at least 1 message per second, even if audit_rate_check is* throttling.* Always increment the lost messages counter. |
audit_receive_msg | |
kgdb_io_ready | Return true if there is a valid kgdb I/O module |
kgdb_reenter_check | |
kgdb_cpu_enter | |
kgdb_console_write | |
kgdb_schedule_breakpoint | |
getthread | |
kdb_disable_nmi | |
kdb_common_init_state | |
kdb_stub | |
ring_buffer_resize | ring_buffer_resize - resize the ring buffer*@buffer: the buffer to resize.*@size: the new size.*@cpu_id: the cpu buffer to resize* Minimum size is 2 * BUF_PAGE_SIZE.* Returns 0 on success and < 0 on failure.
ring_buffer_lock_reserve | ring_buffer_lock_reserve - reserve a part of the buffer*@buffer: the ring buffer to reserve from*@length: the length of the data to reserve (excluding event header)* Returns a reserved event on the ring buffer to copy directly to
ring_buffer_write | ring_buffer_write - write data to the buffer without reserving*@buffer: The ring buffer to write to
ring_buffer_record_off | ring_buffer_record_off - stop all writes into the buffer*@buffer: The ring buffer to stop writes to
ring_buffer_record_on | ring_buffer_record_on - restart writes into the buffer*@buffer: The ring buffer to start writes to
ring_buffer_record_is_on | ring_buffer_record_is_on - return true if the ring buffer can write*@buffer: The ring buffer to see if write is enabled* Returns true if the ring buffer is in a state that it accepts writes.
ring_buffer_record_is_set_on | ring_buffer_record_is_set_on - return true if the ring buffer is set writable*@buffer: The ring buffer to see if write is set enabled* Returns true if the ring buffer is set writable by ring_buffer_record_on().
tracing_record_taskinfo_skip | |
function_trace_call | |
start_critical_timing | |
stop_critical_timing | |
ftrace_pop_return_trace | Retrieve a function return address to the trace stack on thread info. |
trace_synth | |
____bpf_send_signal | |
__irq_work_queue_local | Enqueue on current CPU, work must already be claimed and preempt disabled |
irq_work_sync | Synchronize against the irq_work @entry, ensures the entry is not* currently in use. |
stack_map_get_build_id_offset | |
__perf_event_task_sched_out | Called from scheduler to remove the events of the current task,* with interrupts disabled |
__perf_event_task_sched_in | Called from scheduler to add the events of the current task* with interrupts disabled.* We restore the event value and then enable it.* This does not protect us against NMI, but enable()* sets the enabled bit in the control field of event _before_ |
perf_mmap_close | A buffer can be mmap()ed multiple times; either directly through the same* event, or through other events by use of perf_event_set_output().* In order to undo the VM accounting done by perf_mmap() we need to destroy |
perf_event_task | |
perf_event_comm | |
perf_event_namespaces | |
perf_event_mmap | |
perf_event_ksymbol | |
perf_event_bpf_event | |
__perf_event_overflow | Generic event overflow handling, sampling. |
account_event | |
perf_event_set_output | |
perf_aux_output_begin | This is called before hardware starts writing to the AUX area to* obtain an output handle and make sure there's room in the buffer |
perf_event_max_stack_handler | Used for sysctl_perf_event_max_stack and* sysctl_perf_event_max_contexts_per_stack. |
uprobe_munmap | Called in context of a munmap of a vma. |
xol_take_insn_slot | - search for a free slot. |
padata_do_parallel | padata_do_parallel - padata parallelization function*@ps: padatashell*@padata: object to be parallelized*@cb_cpu: pointer to the CPU that the serialization callback function should* run on. If it's not in the serial cpumask of @pinst* (i |
static_key_count | There are similar definitions for the !CONFIG_JUMP_LABEL case in jump_label |
static_key_slow_inc | |
static_key_enable_cpuslocked | |
static_key_disable_cpuslocked | |
oom_killer_disable | oom_killer_disable - disable OOM killer*@timeout: maximum timeout to wait for oom victims in jiffies* Forces all page allocations to fail rather than trigger OOM killer
task_will_free_mem | Checks whether the given task is dying or exiting and likely to* release its address space. This means that all threads and processes* sharing the same mm have to be killed or exiting.* Caller has to make sure that task->mm is stable (hold task_lock or |
page_mapped | Return true if this page is mapped into pagetables.* For compound page it returns true if any subpage of compound page is mapped. |
__page_mapcount | Slow path of page_mapcount() for compound pages |
wait_iff_congested | wait_iff_congested - Conditionally wait for a backing_dev to become uncongested or a pgdat to complete writes*@sync: SYNC or ASYNC IO*@timeout: timeout in jiffies* In the event of a congested backing_dev (any backing_dev) this waits* for up to @timeout |
change_pte_range | |
anon_vma_free | |
page_expected_state | A bad page could be due to a number of fields. Instead of multiple branches,* try and check multiple fields with one check. The caller must do a detailed* check if necessary. |
free_pages_check_bad | |
check_new_page_bad | |
swap_use_vma_readahead | |
swapin_nr_pages | |
page_trans_huge_map_swapcount | |
swaps_poll | |
swaps_open | |
__frontswap_curr_pages | |
__frontswap_unuse_pages | |
__mmu_notifier_register | Same as mmu_notifier_register but here the caller must hold the mmap_sem in* write mode. A NULL mn signals the notifier is being registered for itree* mode. |
mmu_notifier_unregister | This releases the mm_count pin automatically and frees the mm* structure if it was the last user of it. It serializes against* running mmu notifiers with SRCU and against mmu_notifier_unregister* with the unregister lock + SRCU |
__mmu_interval_notifier_insert | |
ksm_test_exit | ksmd, and unmerge_and_remove_all_rmap_items(), must not touch an mm's* page tables after it has passed through ksm_exit() - which, if necessary,* takes mmap_sem briefly to serialize against them. ksm_exit() does not set |
__buffer_migrate_page | |
shrink_huge_zero_page_count | |
__split_huge_page_tail | |
total_mapcount | |
page_trans_huge_mapcount | This calculates accurately how many mappings a transparent hugepage* has (unlike page_mapcount() which isn't fully accurate) |
khugepaged_test_exit | |
lock_page_memcg | lock_page_memcg - lock a page->mem_cgroup binding*@page: the page* This function protects unlocked LRU pages from being moved to* another cgroup |
__delete_object | Mark the object as not allocated and schedule RCU freeing via put_object(). |
kmemleak_scan | Scan data sections and all the referenced memory blocks allocated via the* kernel's standard allocators. This function must be called with the* scan_mutex held. |
zpool_unregister_driver | zpool_unregister_driver() - unregister a zpool implementation |
msgctl_info | |
bio_put | bio_put - release a reference to a bio*@bio: bio to release reference to* Description:* Put a reference to a &struct bio, either one you have gotten with* bio_alloc, bio_get or bio_clone_*. The last put of a bio will free it.
bio_remaining_done | |
hctx_may_queue | For shared tag users, we track the number of currently active users* and attempt to provide a fair share of the tag depth for each of them. |
atomic_inc_below | Increment 'v', if 'v' is below 'below'. Returns true if we succeeded,* false if 'v' + 1 would be bigger than 'below'. |
blkcg_print_stat | |
blkcg_can_attach | We cannot support shared io contexts, as we have no mean to support* two tasks with the same ioc in two different groups without major rework* of the main cic data structures. For now we allow a task to change |
blkcg_scale_delay | Scale the accumulated delay based on how long it has been since we updated* the delay. We only call this when we are adding delay, in case it's been a* while since we added delay, and when we are checking to see if we need to |
blkcg_maybe_throttle_blkg | This is called when we want to actually walk up the hierarchy and check to* see if we need to throttle, and then actually throttle if there is some* accumulated delay. This should only be called upon return to user space so |
blk_iolatency_enabled | |
__blkcg_iolatency_throttle | |
scale_cookie_change | We scale the qd down faster than we scale up, so we need to use this helper* to adjust the scale_cookie accordingly so we don't prematurely get* scale_cookie at DEFAULT_SCALE_COOKIE and unthrottle too much |
check_scale_change | Check our parent and see if the scale cookie has changed. |
iolatency_check_latencies | |
blkiolatency_timer_fn | |
iolatency_pd_init | |
current_hweight | |
iocg_activate | |
iocg_kick_delay | |
ioc_timer_fn | |
bfq_update_has_short_ttime | |
queue_pm_only_show | |
blk_mq_debugfs_tags_show | |
hctx_active_show | |
proc_key_users_show | |
avc_get_hash_stats | |
selinux_secmark_enabled | selinux_secmark_enabled - Check to see if SECMARK is currently enabled* Description:* This function checks the SECMARK reference counter to see if any SECMARK* targets are currently configured, if the reference counter is greater than |
tomoyo_supervisor | tomoyo_supervisor - Ask for the supervisor's decision
tomoyo_read_stat | tomoyo_read_stat - Read statistic data.*@head: Pointer to "struct tomoyo_io_buffer".* Returns nothing.
tomoyo_commit_condition | tomoyo_commit_condition - Commit "struct tomoyo_condition".*@entry: Pointer to "struct tomoyo_condition".* Returns pointer to "struct tomoyo_condition" on success, NULL otherwise.* This function merges duplicated entries. This function returns NULL if
tomoyo_try_to_gc | tomoyo_try_to_gc - Try to kfree() an entry.*@type: One of values in "enum tomoyo_policy_id".*@element: Pointer to "struct list_head".* Returns nothing.* Caller holds tomoyo_policy_lock mutex.
tomoyo_collect_entry | tomoyo_collect_entry - Try to kfree() deleted elements.* Returns nothing.
tomoyo_get_group | tomoyo_get_group - Allocate memory for "struct tomoyo_path_group"/"struct tomoyo_number_group".*@param: Pointer to "struct tomoyo_acl_param".*@idx: Index number.* Returns pointer to "struct tomoyo_group" on success, NULL otherwise.
tomoyo_get_name | tomoyo_get_name - Allocate permanent memory for string data.*@name: The string to store into the permanent memory.* Returns pointer to "struct tomoyo_path_info" on success, NULL otherwise.
ima_rdwr_violation_check | ima_rdwr_violation_check* Only invalidate the PCR for measured files:* - Opening a file for write when already open for read,* results in a time of measure, time of use (ToMToU) error.* - Opening a file for read when already open for write,
ima_check_last_writer | |
__do_execve_file | sys_execve() executes a new program. |
inode_add_lru | Add inode to LRU if needed (inode is unused and clean).* Needs inode->i_lock held. |
evict_inodes | evict_inodes - evict all evictable inodes for a superblock*@sb: superblock to operate on* Make sure that no inodes with zero refcount are retained
invalidate_inodes | invalidate_inodes - attempt to free all inodes on a superblock*@sb: superblock to operate on*@kill_dirty: flag to guide handling of dirty inodes* Attempts to free all inodes for a given superblock. If there were any
inode_lru_isolate | Isolate the inode from the LRU in preparation for freeing it |
__inode_dio_wait | Direct i/o helper functions |
inode_dio_wait | inode_dio_wait - wait for outstanding DIO requests to finish*@inode: inode to wait for* Waits for all pending direct I/O requests to finish so that we can* proceed with a truncate or equivalent operation
expand_fdtable | Expand the file descriptor table.* This function will allocate a new fdtable and both fd array and fdset, of* the given size.* Return <0 error code on error; 1 on successful completion. |
__fget_light | Lightweight file lookup - no refcnt increment if fd table isn't shared.* You can use this instead of fget if you satisfy all of the following* conditions:* 1) You must call fput_light before exiting the syscall and returning control* to userspace (i |
wb_wait_for_completion | wb_wait_for_completion - wait for completion of bdi_writeback_works*@done: target wb_completion* Wait for one or more work items issued to @bdi with their ->done field* set to @done, which should have been initialized with* DEFINE_WB_COMPLETION() |
writeback_single_inode | Write out an inode's dirty pages. Either the caller has an active reference* on the inode or the inode has I_WILL_FREE set.* This function is designed to be called for writing back one inode which* we go e |
__brelse | Decrement a buffer_head's reference count |
__sync_dirty_buffer | For a data-integrity writeout, we need to wait upon any in-progress I/O* and then start new I/O and then wait upon it. The caller must have a ref on* the buffer_head. |
buffer_busy | try_to_free_buffers() checks if all the buffers on this particular page* are unused, and releases them if so
fsnotify_unmount_inodes | fsnotify_unmount_inodes - an sb is unmounting. handle any watched inodes.*@sb: superblock being unmounted.* Called during unmount with no locks held, so needs to be safe against* concurrent modifiers. We temporarily drop sb->s_inode_list_lock and CAN block.
fsnotify_destroy_group | Trying to get rid of a group. Remove all marks, flush all events and release* the group reference.* Note that another thread calling fsnotify_clear_marks_by_group() may still* hold a ref to the group. |
fanotify_add_new_mark | |
SYSCALL_DEFINE2 | fanotify syscalls
aio_ring_mremap | |
__get_reqs_available | |
aio_read_events | |
__req_need_defer | |
io_should_wake | |
io_cqring_wait | Wait until events become available, if we don't already have some. The* application must reap them itself, as they reside on the shared cq ring. |
io_wq_can_queue | |
io_wqe_enqueue | |
check_conflicting_open | check_conflicting_open - see if the given file points to an inode that has* an existing open that would conflict with the* desired lease
mb_cache_destroy | mb_cache_destroy - destroy cache*@cache: the cache to destroy* Free all entries in cache and cache itself. Caller must make sure nobody* (except shrinker) can reach @cache when calling this. |
zap_threads | |
iomap_page_release | |
iomap_finish_page_writeback | |
iomap_writepage_map | We implement an immediate ioend submission policy here to avoid needing to* chain multiple ioends and hence nest mempool allocations which can violate* forward progress guarantees we need to provide |
invalidate_dquots | Invalidate all dquots on the list |
dqput | Put reference to dquot |
dqget | Get reference to dquot* Locking is slightly tricky here. We are guarded from parallel quotaoff()* destroying our dquot by:* a) checking for quota flags under dq_list_lock and* b) getting a reference to dquot before we release dq_list_lock |
add_dquot_ref | This routine is guarded by s_umount semaphore |
atomic_fetch_add_unless | atomic_fetch_add_unless - add unless the number is already a given value*@v: pointer of type atomic_t*@a: the amount to add to v...*@u: ...unless v is equal to u.* Atomically adds @a to @v, so long as @v was not already @u.* Returns original value of @v (see the sketch after this table)
atomic_inc_unless_negative | |
atomic_dec_unless_positive | |
atomic_dec_if_positive | |
atomic_long_read | |
static_key_count | |
static_key_enable | |
static_key_disable | |
osq_is_locked | |
mm_tlb_flush_pending | |
mm_tlb_flush_nested | |
PageTransCompoundMap | PageTransCompoundMap is the same as PageTransCompound, but it also* guarantees the primary MMU has the entire compound page mapped* through pmd_trans_huge, which in turn guarantees the secondary MMUs* can also map the entire compound page |
refcount_read | refcount_read - get a refcount's value*@r: the refcount* Return: the refcount's value
proc_sys_poll_event | |
page_ref_count | |
page_count | |
get_io_context_active | Get an active reference on the I/O context
ioc_task_link | |
mapping_writably_mapped | Might pages of this file have been modified in userspace?* Note that i_mmap_writable counts all VM_SHARED vmas: do_mmap_pgoff* marks vma as VM_SHARED if it is shared, and the file was opened for* writing i |
inode_is_open_for_write | |
i_readcount_dec | |
compound_mapcount | |
page_mapcount | |
skb_cloned | Is the buffer cloned?
skb_header_cloned | Is the skb header cloned?
rt_genid_ipv4 | |
fnhe_genid | |
blk_cgroup_congested | |
blkcg_unuse_delay | |
blkcg_clear_delay | |
rht_grow_above_75 | rht_grow_above_75 - returns true if nelems > 0.75 * table-size*@ht: hash table*@tbl: current table
rht_shrink_below_30 | rht_shrink_below_30 - returns true if nelems < 0.3 * table-size*@ht: hash table*@tbl: current table
rht_grow_above_100 | rht_grow_above_100 - returns true if nelems > table-size*@ht: hash table*@tbl: current table
rht_grow_above_max | Table overflow
sk_rcvqueues_full | Take into account size of receive queue and backlog queue* Do not take into account this skb truesize,* to allow even a single big packet to come. |
sk_rmem_alloc_get | Returns read allocations
sock_skb_set_dropcount | |
reqsk_queue_len | |
reqsk_queue_len_young | |
tcp_fast_path_check | |
tcp_space | Note: caller must be prepared to deal with negative returns |
ib_destroy_usecnt | ib_destroy_usecnt - Called during destruction to check the usecnt*@usecnt: The usecnt atomic*@why: remove reason*@uobj: The uobject that is destroyed* Non-zero usecnts will block destruction unless destruction was triggered by* a ucontext cleanup.
sbq_index_atomic_inc | |
sbq_wait_ptr | sbq_wait_ptr() - Get the next wait queue to use for a &struct* sbitmap_queue.*@sbq: Bitmap queue to wait on.*@wait_index: A counter per "user" of @sbq. |
queued_fetch_set_pending_acquire | |
queued_spin_is_locked | queued_spin_is_locked - is the spinlock locked?*@lock: Pointer to queued spinlock structure* Return: 1 if it is locked, 0 otherwise |
queued_spin_value_unlocked | queued_spin_value_unlocked - is the spinlock structure unlocked?*@lock: queued spinlock structure* Return: 1 if it is unlocked, 0 otherwise* N |
queued_spin_is_contended | queued_spin_is_contended - check if the lock is contended*@lock : Pointer to queued spinlock structure* Return: 1 if lock contended, 0 otherwise |
queued_spin_trylock | queued_spin_trylock - try to acquire the queued spinlock*@lock : Pointer to queued spinlock structure* Return: 1 if lock acquired, 0 if failed |
queued_read_trylock | queued_read_trylock - try to acquire read lock of a queue rwlock*@lock : Pointer to queue rwlock structure* Return: 1 if lock acquired, 0 if failed |
queued_write_trylock | queued_write_trylock - try to acquire write lock of a queue rwlock*@lock : Pointer to queue rwlock structure* Return: 1 if lock acquired, 0 if failed |
rcu_check_gp_start_stall | This function checks for grace-period requests that fail to motivate* RCU to come out of its idle mode. |
wbt_inflight | |
selinux_xfrm_enabled | |
xfrm_state_kern | |
dqgrab | |
dquot_is_busy |
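Many of the callers listed above follow the same read-then-retry pattern: sample the counter with atomic_read() and loop with a compare-and-exchange until the update sticks. As one hedged illustration, atomic_fetch_add_unless() (described in the table) can be built on atomic_read() roughly as follows; this is a sketch of the common fallback pattern, not the exact code of any particular kernel release.

```c
/*
 * Sketch of atomic_fetch_add_unless() on top of atomic_read(), assuming the
 * usual read-then-try_cmpxchg retry loop. atomic_try_cmpxchg() refreshes 'c'
 * with the current value when the exchange fails, so each iteration re-checks
 * against the excluded value 'u'.
 */
static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
{
	int c = atomic_read(v);            /* snapshot of the current value */

	do {
		if (c == u)                /* already at the excluded value */
			break;
	} while (!atomic_try_cmpxchg(v, &c, c + a));

	return c;                          /* original value, per the table entry */
}
```

The add only happens when the value was not @u at the moment of the exchange, which matches the "Returns original value of @v" contract quoted in the table.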