Function Report

Linux Kernel (v4.4)

Source File: arch/x86/include/asm/atomic.h Create Date: 2016-01-14 09:04:53
Last Modify: 2016-01-11 07:01:32 Copyright © Brick

Function Name:atomic_inc

Function: static inline __attribute__((always_inline)) void atomic_inc(atomic_t *v)

Return Type: static inline __attribute__((always_inline)) void

Parameter:

Type        Parameter Name    Remarks
atomic_t *  v                 pointer of type atomic_t

Function description: increment atomic variable

asm volatile(LOCK_PREFIX "incl %0" : "+m" (v->counter));
Caller
Function Name    Function description
atomic_long_inc
static_key_slow_inc
__key_get
get_huge_page_tail
get_page
get_pid
mapping_allow_writable
allow_write_access
i_readcount_inc
inode_dio_begin signal start of a direct I/O request
get_group_info Get a reference to a group info structure
get_new_cred Get a reference on a new set of credentials
get_uid
get_nsproxy
pause_graph_tracing
skb_get reference buffer
nf_conntrack_get
tasklet_disable_nosync
get_net
rt_genid_bump_ipv4
fnhe_genid_bump
neigh_parms_clone
neigh_clone
dst_hold
dst_clone
sock_hold Grab socket reference count. This operation is valid only when sk is ALREADY grabbed, f.e. it is found in a hash table or a list and the lookup is made under lock preventing hash table modifications.
reqsk_queue_added
in6_dev_get get inet6_dev pointer from netdevice
in6_dev_hold
in6_ifa_hold
page_cache_get_speculative speculatively take a reference to a page. If the page is free (_count == 0), then _count is untouched, and 0 is returned. Otherwise, _count is incremented by 1 and 1 is returned.
__rhashtable_insert_fast Internal function, please use rhashtable_insert_fast() instead
rhashtable_insert_slow
fscache_get_retrieval Get an extra reference on a retrieval operation
__fscache_use_cookie
fib_rule_get
add_template
add_repeat_template
add_short_data_template
add_zeros_template
add_end_template
do_op
sw842_decompress Decompress the 842-compressed buffer of length @ilen at @in to the output buffer @out, using no more than @olen bytes
part_inc_in_flight
get_io_context_active get active reference on ioc
ioc_task_link
bio_get get a reference to a bio, so it won't disappear.
blkg_get get a blkg reference
wb_congested_get_create
mask_and_ack_8259A Careful! The 8259A is a fragile beast; it pretty much _has_ to be done exactly like this (mask it first, _then_ send the EOI, and the order of EOI to the two 8259s is important!)
get_user_ns
tboot_cpu_callback
cpu_init initializes state that is per-CPU.
x86_reserve_hardware
x86_add_exclusive Check if we can create an event of a certain type (that no conflicting events are present).
__x86_pmu_event_init Set up the hardware configuration for a given attr_type
__intel_shared_reg_get_constraints manage allocation of shared extra msr for certain events
uncore_pmu_to_box
uncore_get_constraint generic get constraint function for shared match/mask registers.
uncore_cpu_starting
nhmex_mbox_get_shared_reg
nhmex_rbox_get_constraint Each rbox has 4 event sets which monitor PQI ports 0~3 or 4~7. An event set consists of 6 events; the 3rd and 4th events in an event set use the same extra register, so an event set uses 5 extra registers.
mce_end Synchronize between CPUs after the main scanning loop. This invokes the bulk of the Monarch processing.
threshold_create_bank
check_tsc_sync_source Source CPU calls into this. It waits for the freshly booted target CPU to arrive and then starts the measurement.
check_tsc_sync_target Freshly booted CPUs call into this.
__smp_error_interrupt This interrupt should never happen with our APIC/SMP architecture
ioapic_ack_level
update_ftrace_func
arch_ftrace_update_code
get_anon_vma
page_dup_rmap
mpol_get
get_bh
get_rpccred
posix_acl_dup Duplicate an ACL handle.
dup_mmap
get_task_mm acquire a reference to the task's mm
copy_mm
copy_files
copy_sighand
copy_process Create a new process
exit_mm Turn us into a lazy TLB process if we aren't already.
uid_hash_find
__sigqueue_alloc allocate a new signal queue record. This may be called without locks if and only if t == current; otherwise an appropriate lock must be held to stop the target task from exiting
__request_module try to load a kernel module
helper_lock
wq_worker_waking_up a worker is waking up
worker_clr_flags clear worker flags and adjust nr_running accordingly
flush_workqueue_prep_pwqs prepare pwqs for workqueue flushing
free_pidmap
get_ipc_ns
copy_creds Copy credentials for the new process created by fork()
commit_creds Install new credentials upon the current task
__async_schedule
context_switch switch to the new MM and the new thread's register state.
io_schedule_timeout This task is about to go to sleep on IO. Increment rq->nr_iowait so that process accounting knows that this is a task in IO wait state.
rq_attach_root
build_sched_groups will build a circular linked list of the groups covered by the given span, and will set each group's ->cpumask correctly, and ->cpu_capacity to 0.
sched_init
rt_set_overload
dl_set_overload
cpupri_set update the cpu priority setting
__lock_acquire This gets called for every mutex_lock*()/spin_lock*() operation. We maintain the dependency maps and validate the locking attempt.
__torture_print_stats Create a lock-torture-statistics message in the specified buffer.
freeze_processes Signal user space processes to enter the refrigerator. The current thread will not be frozen. The same process that calls freeze_processes must later call thaw_processes.
hibernate Carry out system hibernation, including saving the image.
software_resume Resume from a saved hibernation image.
hib_submit_io
snapshot_open
snapshot_release
__irq_wake_thread
irq_threadInterrupt handler thread
rcu_torture_allocAllocate an element from the rcu_tortures pool.
rcu_torture_freeFree an element to the rcu_tortures pool.
rcu_torture_pipe_update_oneUpdate callback in the pipe. This should be invoked after a grace period.
rcu_torture_writer RCU torture writer kthread. Repeatedly substitutes a new structure for that pointed to by rcu_torture_current, freeing the old structure after a series of grace periods (the "pipeline").
rcu_torture_timer RCU torture reader from timer handler. Dereferences rcu_torture_current, incrementing the corresponding element of the pipeline array. The counter in the element should never be greater than 1; otherwise, the RCU implementation is broken.
rcu_torture_reader RCU torture reader kthread. Repeatedly dereferences rcu_torture_current, incrementing the corresponding element of the pipeline array. The counter in the element should never be greater than 1; otherwise, the RCU implementation is broken.
rcu_torture_stats_print Print torture statistics. Caller must ensure that there is only one call to this function at a given time.
rcu_torture_barrier_cbf Callback function for RCU barrier testing.
rcu_eqs_enter_common current CPU is moving towards extended quiescent state
rcu_eqs_exit_common current CPU moving away from extended quiescent state
rcu_nmi_enter inform RCU of entry to NMI context
rcu_nmi_exit inform RCU of exit from NMI context
rcu_barrier_func Called with preemption disabled, and from cross-CPU IRQ context.
_rcu_barrier Orchestrate the specified type of RCU barrier, waiting for all RCU callbacks of the specified type to complete.
timer_stats_update_stats Update the statistics for a timer.
futex_get_mm
hb_waiters_inc Reflects a new waiter being added to the waitqueue.
attach_to_pi_state Validate that the existing waiter has a pi_state and sanity check the pi_state against the user space value. If correct, attach to it.
futex_requeue Requeue waiters from uaddr1 to uaddr2
get_css_set refcounted get/put for css_set objects
cgroup_mkdir
freezer_css_online commit creation of a freezer css
freezer_apply_state apply state change to a single cgroup_freezer
audit_log_lost conditionally log lost audit message event
audit_get_watch
get_tree
kgdb_cpu_enter
kgdb_schedule_breakpoint
kgdb_breakpoint generate breakpoint exception
vkdb_printf
rb_remove_pages
ring_buffer_resize resize the ring buffer
ring_buffer_record_disable stop all writes into the buffer
ring_buffer_record_disable_cpu stop all writes into the cpu_buffer
rb_reader_lock
ring_buffer_read_prepare Prepare for a non consuming read of the buffer
ring_buffer_reset_cpu reset a ring buffer per CPU buffer
s_start The current tracer is copied to avoid global locking all around.
tracing_cpumask_write
ftrace_dump
start_critical_timing
stop_critical_timing
__trace_mmiotrace_rw
__trace_mmiotrace_map
ftrace_push_return_trace Add a function return address to the trace stack on thread info.
blk_subbuf_start_callback Keep track of how many times we encountered a full subbuffer, to aid the user space app in telling how many lost events there were.
pm_runtime_get_noresume
ftrace_dump_buf
bpf_map_inc
bpf_prog_get called by sockets/tracing/seccomp before attaching a program to an event; pairs with bpf_prog_put()
exclusive_event_destroy
perf_mmap_open
perf_mmap
account_event_cpu
account_event
__get_page_tail_foll
get_page_foll This is meant to be called as the FOLL_GET operation of follow_page() and it must be called while holding the proper PT lock while the pte (or pmd_trans_huge) is still mapping the page.
get_uprobe
xol_take_insn_slot search for a free slot.
padata_do_parallel padata parallelization function
padata_do_serial padata serialization function
mark_oom_victim mark the given task as OOM victim
oom_kill_process Must be called while holding a reference to p, which will be released upon returning.
set_wb_congested
use_mm Makes the calling kernel thread take on the specified mm context. (Note: this routine is intended to be called only from a kernel thread context)
__remove_shared_vm_struct Requires inode->i_mapping->i_mmap_rwsem
__vma_link_file
lookup_swap_cache Lookup a swap entry in the swap cache. A found page will be returned unlocked and with its refcount incremented; we rely on the kernel lock getting page table operations atomic even if we drop the page lock before returning.
try_to_unuse We completely avoid races by reading each swap page in advance, and then search for the process using it. All the necessary page table adjustments can then be made atomically.
SYSC_swapoff
SYSC_swapon
__frontswap_set
zswap_frontswap_store attempts to compress and store a single page
do_mmu_notifier_register
__ksm_enter
__khugepaged_enter
mem_cgroup_move_charge
zpool_get_driver this assumes @type is null-terminated.
do_msgsnd
copy_semundo If CLONE_SYSVSEM is set, establish sharing of SEM_UNDO state between parent and child tasks.
create_ipc_ns
bio_inc_remaining Increment chain count for the bio. Make sure the CHAIN flag update is visible before the raised count.
blk_queue_init_tags initialize the queue tag info
blkdev_issue_discard queue a discard
blkdev_issue_write_same queue a write same operation
__blkdev_issue_zeroout generate a number of zero-filled write bios
__blk_mq_alloc_request
__blk_mq_tag_busy If a previously inactive queue goes active, bump the active user count.
__bsg_get_device
selinux_xfrm_notify_policyload
selinux_secmark_refcount_inc
xfrm_pol_hold
xfrm_state_hold
secpath_get
selinux_xfrm_alloc_user Allocates an xfrm_sec_state and populates it using the supplied security xfrm_user_sec_ctx context.
selinux_xfrm_policy_clone LSM hook implementation that copies security data structure from old to new for policy cloning.
selinux_xfrm_state_alloc_acquire LSM hook implementation that allocates an xfrm_sec_state and populates it based on a secid.
freeze_super lock the filesystem and force it into a consistent state
get_mnt_ns
copy_name
__iget inode->i_lock must be held
iput put an inode
get_files_struct
clone_mnt
mount_subtree
dqgrab
wb_queue_work
pde_get
fsnotify_get_group Get reference to a group.
fsnotify_get_mark
fsnotify_add_mark_locked Attach an initialized mark to a given group and fs object. These marks may be used for the fsnotify backend to determine which event types should be delivered to which group.
inotify_new_watch
SYSC_fanotify_init
userfaultfd_file_create Creates a userfaultfd file pointer.
rfcomm_dlc_hold
atm_dev_hold
mb_cache_entry_alloc Allocates a new cache entry
mb_cache_entry_get Get a cache entry by device/block number. (There can only be one entry in the cache per device and block.) Returns NULL if no such cache entry exists. The returned cache entry is locked for exclusive access ("single writer").
__mb_cache_entry_find
dquot_scan_active Call callback for every active dquot on the given filesystem
dqget Get a reference to a dquot
proc_mem_open
proc_sys_poll_notify
kernfs_get get a reference count on a kernfs_node
kernfs_unbreak_active_protection undo kernfs_break_active_protection()
kernfs_get_open_node get or create kernfs_open_node
kernfs_unmap_bin_file
kernfs_notify_workfn
configfs_get