Function Report

Linux Kernel (v4.4)

Source File: arch/x86/include/asm/atomic.h  Create Date: 2016-01-14 09:04:52
Last Modify: 2016-01-11 07:01:32  Copyright © Brick

Function Name:atomic_read

Function: static inline __attribute__((always_inline)) int atomic_read(const atomic_t *v)

Return Type: int (the function itself is declared static inline __attribute__((always_inline)))

Parameter:

Type              Parameter Name  Remarks
const atomic_t *  v               pointer of type atomic_t

Function description: read atomic variable. Atomically reads the value of @v.

Line 27: returns READ_ONCE((v)->counter).
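
For reference, here is a minimal, self-contained sketch of what that line does: READ_ONCE forces the compiler to emit a single, untorn load of v->counter. The atomic_t type below matches the kernel's definition, but READ_ONCE is simplified to a plain volatile load (the real macro in include/linux/compiler.h is more elaborate), so this is an illustration that compiles in user space, not the kernel's own build.

    #include <stdio.h>

    typedef struct { int counter; } atomic_t;   /* as in the kernel sources */

    /* simplified stand-in for the kernel's READ_ONCE: one volatile load */
    #define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

    static inline __attribute__((always_inline)) int atomic_read(const atomic_t *v)
    {
            return READ_ONCE((v)->counter);     /* single untorn load, no barrier */
    }

    int main(void)
    {
            atomic_t refs = { .counter = 2 };
            printf("refs = %d\n", atomic_read(&refs));   /* prints "refs = 2" */
            return 0;
    }

Note that atomic_read guarantees only that the load itself is atomic; it implies no memory barrier and no ordering against other accesses.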
Caller
Function Name  Function description
__atomic_add_unless add unless the number is already a given value
atomic_dec_if_positive decrement by 1 if old value positive
atomic_long_read
static_key_count
queued_read_can_lock would read_trylock() succeed?
queued_write_can_lock would write_trylock() succeed?
queued_read_trylock try to acquire read lock of a queue rwlock
queued_write_trylock try to acquire write lock of a queue rwlock
wait_on_atomic_t Wait for an atomic_t to become 0
osq_is_locked
mutex_is_locked is the mutex locked
proc_sys_poll_event
PageBuddy
__SetPageBuddy
PageBalloon
__SetPageBalloon
put_page_testzero Drop a ref, return true if the refcount fell to zero (the page has no users)
page_mapcount
page_count
get_huge_page_tail
get_page
page_mapped Return true if this page is mapped into pagetables.
mapping_writably_mapped Might pages of this file have been modified in userspace? Note that i_mmap_writable counts all VM_SHARED vmas: do_mmap_pgoff marks vma as VM_SHARED if it is shared, and the file was opened for writing, i.e. vma may be mprotected writable even if now readonly.
inode_is_open_for_write
i_readcount_dec
is_trace_idt_enabled
skb_fclone_busy check if fclone is busy
skb_cloned is the buffer a clone
skb_header_cloned is the header a clone
skb_shared is the buffer shared
rt_genid_ipv4
fnhe_genid
dst_free
sk_del_node_init
sk_nulls_del_node_init_rcu
sk_rcvqueues_full Take into account size of receive queue and backlog queue. Do not take into account this skb truesize, to allow even a single big packet to come.
sk_wmem_alloc_get returns write allocations
sk_rmem_alloc_get returns read allocations
sock_wspace
sock_writeable Default write policy as shown to user space via poll/select/SIGIO
sock_skb_set_dropcount
reqsk_free
reqsk_queue_len
reqsk_queue_len_young
current_is_single_threaded Returns true if the task does not share ->mm with another thread/process.
rht_grow_above_75 returns true if nelems > 0.75 * table-size
rht_shrink_below_30 returns true if nelems < 0.3 * table-size
rht_grow_above_100 returns true if nelems > table-size
rht_grow_above_max returns true if table is above maximum
rhashtable_shrink Shrink hash table while allowing concurrent lookups
test_bucket_stats
fscache_retrieval_complete Record (partial) completion of a retrieval
tcp_fast_path_check
tcp_space Note: caller must be prepared to deal with negative returns
pci_is_enabled
gen_pool_alloc allocate special memory from the pool
gen_pool_avail get available free space of the pool
part_in_flight
get_io_context_active get active reference on ioc
ioc_task_link
fail_dump
should_fail This code is stolen from failmalloc-1.0, http://www.nongnu.org/failmalloc/
freezing Check if there is a request to freeze a process
test_atomic
blkg_get get a blkg reference
blkg_put put a blkg reference
do_int3 May run on IST stack.
arch_show_interrupts /proc/interrupts printing for arch specific interrupts
arch_irq_stat
load_mm_cr4
tboot_wait_for_aps
tboot_cpu_callback
x86_reserve_hardware
x86_add_exclusive Check if we can create event of a certain type (that no conflicting events are present).
perf_event_nmi_handler
__intel_shared_reg_get_constraints manage allocation of shared extra msr for certain events
uncore_get_constraint generic get constraint function for shared match/mask registers.
uncore_pci_remove
__snbep_cbox_get_constraint
snbep_pcu_get_constraint
nhmex_mbox_get_shared_reg
nhmex_rbox_get_constraint Each rbox has 4 event sets which monitor PQI port 0~3 or 4~7. An event set consists of 6 events; the 3rd and 4th events in an event set use the same extra register, so an event set uses 5 extra registers.
mce_timed_out Check if a timeout waiting for other CPUs happened.
mce_start Start of Monarch synchronization. This waits until all CPUs have entered the exception handler and then determines if any of them saw a fatal event.
mce_end Synchronize between CPUs after main scanning loop. This invokes the bulk of the Monarch processing.
cmci_intel_adjust_timer
thermal_throttle_init_device
nmi_shootdown_cpus Halt all other CPUs, calling the specified function on each of them
smp_stop_nmi_callback
check_tsc_sync_source Source CPU calls into this - it waits for the freshly booted target CPU to arrive and then starts the measurement:
check_tsc_sync_target Freshly booted CPUs call into this:
reserve_eilvt_offset
prepare_ftrace_return Hook the return address and push it in the stack of return addrs in current thread info.
kgdb_nmi_handler
__kgdb_notify
__put_task_struct
mm_release Please note the differences between mmput and mm_release. mmput is called whenever we stop holding onto a mm_struct, error, success, whatever.
copy_process Create a new process
check_unshare_flags Check constraints on flags passed to the unshare system call.
unshare_fd Unshare file descriptor table if it is being shared
mm_update_next_owner A task is exiting. If it owned this mm, find a new owner for the mm.
tasklet_action
tasklet_hi_action
__sigqueue_alloc allocate a new signal queue record - this may be called without locks if and only if t == current, otherwise an appropriate lock must be held to stop the target task from exiting
__request_module try to load a kernel module
__usermodehelper_disable Prevent new helpers from being started.
__need_more_worker Policy functions. These define the policies on how the global worker pools are managed. Unless noted otherwise, these functions assume that they're being called with pool->lock held.
keep_working Do I need to keep working? Called from currently running workers.
worker_enter_idle enter idle state
flush_workqueue_prep_pwqs prepare pwqs for workqueue flushing
alloc_pidmap
put_pid
put_cred_rcu The RCU callback to actually dispose of a set of credentials
__put_cred Destroy a set of credentials
exit_creds Clean up a task's credentials when it exits
copy_creds Copy credentials for the new process created by fork()
commit_creds Install new credentials upon the current task
abort_creds Discard a set of credentials and unlock the current task
override_creds Override the current process's subjective credentials
revert_creds Revert a temporary subjective credentials override
__async_schedule
cpu_report_state Called to poll specified CPU's state, for example, when waiting for a CPU to come online.
cpu_check_up_prepare If CPU has died properly, set its state to CPU_UP_PREPARE and return success
nr_iowait
nr_iowait_cpu
get_iowait_load
__free_domain_allocs
claim_allocations NULL the sd_data elements we've used to build the sched_domain and sched_group structure so that the subsequent __free_domain_allocs() will not free the data we're using.
account_idle_time Account for idle time.
rt_overloaded
dl_overloaded
wake_atomic_t_function
__wait_on_atomic_t To allow interruptible waiting and asynchronous (i.e. nonblocking) waiting, the actions of __wait_on_atomic_t() are permitted return codes. Nonzero return codes halt waiting and return.
cpupri_find find the best (lowest-pri) CPU in the system
ww_mutex_set_context_fastpath After acquiring lock with fastpath or when we lost out in contested slowpath, set ctx and wake up any waiters so they can recheck.
__mutex_lock_common Lock a mutex (possibly interruptible), slowpath:
osq_wait_next Get a stable @node->next pointer, either for unlock() or unqueue() purposes. Can return NULL in case we were the last queued and we updated @lock instead.
queued_write_lock_slowpath acquire write lock of a queue rwlock
lock_torture_cleanup Forward reference.
hib_wait_io
crc32_threadfn CRC32 update function that runs in its own thread.
lzo_compress_threadfn Compression function that runs in its own thread.
save_image_lzo Save the suspend image data compressed with LZO.
lzo_decompress_threadfn Decompression function that runs in its own thread.
load_image_lzo Load compressed image data and decompress them with LZO.
synchronize_hardirq wait for pending hard IRQ handlers (on other CPUs)
synchronize_irq wait for pending IRQ handlers (on other CPUs)
note_interrupt
rcutorture_trace_dump
rcu_torture_stats_print Print torture statistics. Caller must ensure that there is only one call to this function at a given time.
rcu_torture_barrierkthread function to drive and coordinate RCU barrier testing.
rcu_torture_cleanup
rcu_eqs_enter_common current CPU is moving towards extended quiescent state
rcu_eqs_exit_common current CPU moving away from extended quiescent state
rcu_nmi_enter inform RCU of entry to NMI context
rcu_nmi_exit inform RCU of exit from NMI context
__rcu_is_watching are RCU read-side critical sections safe?
_rcu_barrier_trace Helper function for _rcu_barrier() tracing. If tracing is disabled, the compiler is expected to optimize this away.
rcu_boot_init_percpu_data Do boot-time initialization of a CPU's per-CPU RCU data.
rcu_init_percpu_data Initialize a CPU's per-CPU RCU data. Note that only one online or offline event can be happening at a given time.
print_cpu_stall_info Print out diagnostic information for the specified stalled CPU.
show_rcubarrier
print_one_rcu_data
show_rcuexp
tstats_show
hb_waiters_pending
attach_to_pi_state Validate that the existing waiter has a pi_state and sanity check the pi_state against the user space value. If correct, attach to it.
cgroup_destroy_root
cgroup_setup_root
cgroup_task_count count the number of tasks in a cgroup.
proc_cgroupstats_show Display information about each subsystem and each hierarchy
audit_log_lost conditionally log lost audit message event
audit_receive_msg
audit_tree_freeing_mark
kgdb_io_ready Return true if there is a valid kgdb I/O module. Also, if no debugger is attached, a message can be printed to the console about waiting for the debugger to attach.
kgdb_reenter_check
kgdb_cpu_enter
kgdb_console_write
kgdb_schedule_breakpoint
getthread
kdb_disable_nmi
kdb_common_init_state
kdb_stub
ring_buffer_resize resize the ring buffer
ring_buffer_lock_reserve reserve a part of the buffer
ring_buffer_write write data to the buffer without reserving
ring_buffer_record_off stop all writes into the buffer
ring_buffer_record_on restart writes into the buffer
ring_buffer_record_is_on return true if the ring buffer can write
tracing_record_cmdline
function_trace_call
start_critical_timing
stop_critical_timing
ftrace_pop_return_trace Retrieve a function return address to the trace stack on thread info.
blk_dropped_read
pm_children_suspended
__perf_event_task_sched_out Called from scheduler to remove the events of the current task, with interrupts disabled.
__perf_event_task_sched_in Called from scheduler to add the events of the current task with interrupts disabled.
perf_mmap_close A buffer can be mmap()ed multiple times; either directly through the same event, or through other events by use of perf_event_set_output().
perf_event_task
perf_event_comm
perf_event_mmap
__perf_event_overflow Generic event overflow handling, sampling.
perf_event_set_output
rb_irq_work
set_page_refcounted Turn a non-refcounted page (->_count == 0) into refcounted with a count of one.
__get_page_tail_foll
get_page_foll This is meant to be called as the FOLL_GET operation of follow_page() and it must be called while holding the proper PT lock while the pte (or pmd_trans_huge) is still mapping the page.
uprobe_munmap Called in context of a munmap of a vma.
xol_take_insn_slot - search for a free slot.
padata_do_parallel padata parallelization function
padata_reorder
padata_flush_queues Flush all objects out of the padata queues.
oom_killer_disable disable OOM killer
free_pages_check
check_new_page This page is about to be returned from the page allocator
has_unmovable_pages This function checks whether pageblock includes unmovable pages or not. If @count is not zero, it is okay to include less @count unmovable pages
put_refcounted_compound_page
wait_iff_congested Conditionally wait for a backing_dev to become uncongested or a zone to complete writes
vmacache_flush_all Flush vma caches for threads that share a given mm.
dump_page_badflags
anon_vma_free
free_vmap_area_noflush Free a vmap area, caller ensuring that the area has been unmapped and flush_cache_vunmap had been called for the correct range previously.
swapin_nr_pages
try_to_unuse We completely avoid races by reading each swap page in advance, and then search for the process using it. All the necessary page table adjustments can then be made atomically.
swaps_poll
swaps_open
__frontswap_curr_pages
__frontswap_unuse_pages
do_mmu_notifier_register
mmu_notifier_unregister This releases the mm_count pin automatically and frees the mm structure if it was the last user of it
mmu_notifier_unregister_no_release Same as mmu_notifier_unregister but no callback and no srcu synchronization.
ksm_test_exit ksmd, and unmerge_and_remove_all_rmap_items(), must not touch an mm's page tables after it has passed through ksm_exit() - which, if necessary, takes mmap_sem briefly to serialize against them
shrink_huge_zero_page_count
__split_huge_page_refcount
khugepaged_test_exit
mem_cgroup_begin_page_stat begin a page state statistics transaction
__delete_object Mark the object as not allocated and schedule RCU freeing via put_object().
kmemleak_scan Scan data sections and all the referenced memory blocks allocated via the kernel's standard allocators. This function must be called with the scan_mutex held.
zpool_unregister_driver unregister a zpool implementation.
msgctl_nolock
bio_put release a reference to a bio
bio_remaining_done
blk_queue_enter
blk_queue_resize_tags change the queueing depth
blk_mq_queue_reinit Basically redo blk_mq_init_queue with queue frozen
bt_index_atomic_inc
blk_mq_tag_wakeup_all Wakeup all potentially sleeping on tags
hctx_may_queue For shared tag users, we track the number of currently active users and attempt to provide a fair share of the tag depth for each of them.
bt_wait_ptr
bt_wake_ptr
blk_mq_tag_sysfs_show
blk_mq_hw_sysfs_active_show
part_inflight_show
blkcg_can_attach We cannot support shared io contexts, as we have no means to support two tasks with the same ioc in two different groups without major rework of the main cic data structures
cfq_arm_slice_timer
cfq_update_idle_window Disable idle window if the process thinks too long or seeks so much that it doesn't matter
avc_get_hash_stats
selinux_xfrm_enabled
selinux_secmark_enabled Check to see if SECMARK is currently enabled
xfrm_state_kern
ima_rdwr_violation_check Only invalidate the PCR for measured files: opening a file for write when already open for read results in a time-of-measure, time-of-use (ToMToU) error
ima_check_last_writer
de_thread This function makes sure the current process has its own signal table, so that flush_signal_handlers can later reset the handlers without disturbing other processes
do_execveat_common sys_execve() executes a new program.
inode_add_lru Add inode to LRU if needed (inode is unused and clean).
evict_inodes evict all evictable inodes for a superblock
invalidate_inodes attempt to free all inodes on a superblock
inode_lru_isolate Isolate the inode from the LRU in preparation for freeing it.
__inode_dio_wait Direct i/o helper functions
inode_dio_wait wait for outstanding DIO requests to finish
expand_fdtable Expand the file descriptor table.
__fget_light Lightweight file lookup - no refcnt increment if fd table isn't shared.
dqgrab
wb_wait_for_completion wait for completion of bdi_writeback_works
writeback_single_inode Write out an inode's dirty pages. Either the caller has an active reference on the inode or the inode has I_WILL_FREE set.
__brelse Decrement a buffer_head's reference count. If all buffers against a page have zero reference count, are clean and unlocked, and the page is clean and unlocked, then try_to_free_buffers() may strip the buffers from the page.
__sync_dirty_buffer For a data-integrity writeout, we need to wait upon any in-progress I/O and then start new I/O and then wait upon it. The caller must have a ref on the buffer_head.
buffer_busy try_to_free_buffers() checks if all the buffers on this particular page are unused, and releases them if so.
fsnotify_unmount_inodes an sb is unmounting. handle any watched inodes.
inotify_idr_find_locked
inotify_remove_from_idr Remove the mark from the idr (if present) and drop the reference on the mark because it was in the idr.
inotify_new_watch
fanotify_add_new_mark
fanotify_add_inode_mark
SYSC_fanotify_init
aio_ring_mremap
get_reqs_available
aio_read_events
check_conflicting_open see if the given dentry points to a file that has an existing open that would conflict with the desired lease.
atm_may_send
__mb_cache_entry_forget
__mb_cache_entry_release
mb_cache_shrink_scan memory pressure callback
mb_cache_shrink_count
mb_cache_shrink Removes all cache entries of a device from the cache. All cache entries currently in use cannot be freed, and thus remain in the cache. All others are freed.
mb_cache_destroy Shrinks the cache to its minimum possible size (hopefully 0 entries), and then destroys it. If this was the last mbcache, un-registers the mbcache from kernel memory management.
mb_cache_entry_alloc Allocates a new cache entry
zap_threads
dquot_release Release dquot
invalidate_dquots Invalidate all dquots on the list. Note that this function is called after quota is disabled and pointers from inodes removed, so there cannot be new quota users.
dqput Put reference to dquot
dqget Get reference to dquot
add_dquot_ref This routine is guarded by dqonoff_mutex mutex
task_mem Logic: we've got two memory sums for each process, "shared" and "non-shared". Shared memory may get counted more than once, for each process that owns it. Non-shared memory is counted accurately.
task_sig
proc_sys_poll
kernfs_active
kernfs_drain drain kernfs_node
kernfs_get get a reference count on a kernfs_node
kernfs_put put a reference count on a kernfs_node
kernfs_activate activate a node which started deactivated
__kernfs_remove
kernfs_remove_self remove a kernfs_node from its own method
kernfs_seq_show
kernfs_file_direct_read As reading a bin file can have side-effects, the exact offset and bytes specified in read(2) call should be passed to the read callback making it difficult to use seq_file. Implement simplistic custom buffering for bin files.
kernfs_fop_poll Kernfs attribute files are pollable. The idea is that you read the content and then you use 'poll' or 'select' to wait for the content to change.
configfs_get
configfs_put
configfs_d_iput
debugfs_atomic_t_get
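
Most of the callers above follow the same pattern: a one-line predicate or accessor that reads an atomic counter. As a sketch, here are two of the simplest from the list as they appear in v4.4 (mutex_is_locked in include/linux/mutex.h and static_key_count in include/linux/jump_label.h); the struct definitions below are pared down to the single field each function reads so the sketch stands alone, and are not the full kernel structures.

    typedef struct { int counter; } atomic_t;

    static inline int atomic_read(const atomic_t *v)
    {
            return (*(const volatile int *)&(v)->counter);   /* as above */
    }

    /* pared-down stand-ins; the real kernel structs have many more fields */
    struct mutex      { atomic_t count;   /* 1: unlocked, 0: locked, <0: waiters */ };
    struct static_key { atomic_t enabled; };

    /* from include/linux/mutex.h (v4.4) */
    static inline int mutex_is_locked(struct mutex *lock)
    {
            return atomic_read(&lock->count) != 1;
    }

    /* from include/linux/jump_label.h (v4.4) */
    static inline int static_key_count(struct static_key *key)
    {
            return atomic_read(&key->enabled);
    }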