Function Report

Linux Kernel (v4.4)

Source File: include/linux/list.h    Create Date: 2016-01-14 09:03:28
Last Modify: 2016-01-11 07:01:32    Copyright © Brick

Function Name: list_add_tail

Function: static inline void list_add_tail(struct list_head *new, struct list_head *head)

Return Type: void (declared static inline)

Parameters:

Type                Parameter Name  Remarks
struct list_head *  new             new entry to be added
struct list_head *  head            list head to add it before

Function description: add a new entry at the tail of a list

Insert a new entry before the specified head; this is useful for implementing queues. Internally, the new entry is spliced between the two known consecutive entries head->prev and head.
Caller
Function Name    Function description
list_move_tail delete from one list and add as another's tail
__add_wait_queue_tail
plist_add add @node to @head
plist_requeue Requeue @node at end of same-prio entries.
uevent_net_init
kobj_kset_join add the kobject to its kset's list
add_tail
klist_add_before Init a klist_node and add it before an existing node
resource_list_add_tail
ddebug_add_module Allocate a new ddebug_table for the given module and add it to the global list.
hash_bucket_add Add an entry to a hash bucket
dma_debug_resize_entries
prealloc_memory DMA-API debugging init code
alternatives_smp_module_add
__rapl_pmu_event_start
__find_pci2phy_map
uncore_pci_probe add a pci uncore device
snb_uncore_imc_event_start
__add_pin_to_irq_node The common case is 1:1 IRQ<->pin mappings. Sometimes there are shared ISA-space IRQs, so we have to support them. We are super fast in the common case, and fast for shared ISA-space IRQs.
copy_processCreate a new process
__send_signal
send_sigqueue
insert_work insert a work into a pool
worker_attach_to_pool attach a worker to a pool
send_mayday
flush_workqueue ensure that any scheduled work has run to completion.
workqueue_apply_unbound_cpumask
kthread_create_on_node create a kthread.
insert_kthread_work insert @work before @pos in @worker
__async_schedule
__enqueue_rt_entity
__mutex_lock_common Lock a mutex (possibly interruptible), slowpath
__down_common Because this function is inlined, the 'state' parameter will be constant, and thus optimised away by the compiler. Likewise the 'timeout' parameter for the cases without timeouts.
__down_read get a read lock on the semaphore
__down_write_nested get a write lock on the semaphore
rwsem_down_read_failed Wait for the read lock to be granted
rwsem_down_write_failed Wait until we successfully acquire the write lock
pm_qos_update_flags Update a set of PM QoS flags.
alloc_rtree_node Allocate a new node and add it to the radix tree.
create_mem_extents create a list of memory extents representing contiguous ranges of PFNs
memory_bm_create allocate memory for a memory bitmap
__register_nosave_region register a range of page frames the contents of which should not be saved during suspend (to be used in the early initialization code)
irq_alloc_domain_generic_chips Allocate generic chips for an irq domain
irq_setup_generic_chip Setup a range of interrupts with a generic chip
rcu_torture_free Free an element to the rcu_tortures pool.
rcu_torture_init
rcu_preempt_ctxt_queue Queues a task preempted within an RCU-preempt read-side critical section into the appropriate location within the ->blkd_tasks list, depending on the states of any ongoing normal and expedited grace periods
klp_init_patch
css_set_move_task move a task from one css_set to another
link_css_set a helper function to link a css_set to a cgroup
find_css_set return a new css_set with one cgroup updated
cgroup_enable_task_cg_lists
cgroup_taskset_add try to add a migration target task to a taskset
cgroup_add_cftypes add an array of cftypes to a subsystem
cgroup_init cgroup initialization
__cpu_stop_queue_work
audit_add_rule Add rule to given filterlist if not a duplicate.
audit_alloc_name
populate_kprobe_blacklist Lookup and populate the kprobe_blacklist.
tracepoint_module_coming
register_trace_event register output for an event type
register_stat_tracer
hold_module_trace_bprintk_format
postfix_append_operand
postfix_append_op
replace_system_preds
register_trace_kprobe Register a trace_probe and probe_event
register_trace_uprobe Register a trace_uprobe and probe_event
list_add_event Add an event to the lists for its context. Must be called with ctx->mutex and ctx->lock held.
perf_group_attach
SYSC_perf_event_open
inherit_event inherit an event from parent task to child task
toggle_bp_slot Add/remove the given breakpoint in our constraint table
padata_do_parallel padata parallelization function
padata_reorder
padata_do_serial padata serialization function
__free_one_page Freeing function for a buddy system allocator.
rmqueue_bulk Obtain a specified number of elements from the buddy allocator, all under a single hold of the lock, for efficiency. Add them to the supplied list. Returns the number of new pages which were placed at *list.
free_hot_cold_page Free a 0-order page (cold == true ? free a cold page : free a hot page)
lru_add_page_tail used by __split_huge_page_refcount()
register_shrinker Add a shrinker callback to be called from the vm.
shrink_page_list returns the number of reclaimed pages
list_lru_add
__purge_vmap_area_lazy Purges all lazily-freed vmap areas.
purge_fragmented_blocks
add_swap_extent Add a block range (and the corresponding page range) into this swapdev's extent list. The extent list is kept sorted in page order.
add_swap_count_continuation called when a swap count is duplicated
migrate_page_add page migration
__ksm_enter
cache_grow Grow (by 1) the number of slabs within a cache. This is called by kmem_cache_alloc() when there are no active objs left in a cache.
free_block Caller needs to acquire correct kmem_cache_node's list_lock
__add_partial Management of partially allocated slabs.
do_move_page_to_node_array Move a set of pages as indicated in the pm array. The addr field must be set to the virtual address of the page to be moved and the node number must contain a valid target node. The pm array ends with node = MAX_NUMNODES.
__khugepaged_enter
add_to_kill Schedule a process for later kill. Uses GFP_ATOMIC allocations to avoid potential recursions in the VM. TBD would GFP_NOIO be enough?
scan_block Scan a memory block (exclusive range) for valid pointers and add those found to the gray list.
kmemleak_scan Scan data sections and all the referenced memory blocks allocated via the kernel's standard allocators. This function must be called with the scan_mutex held.
kmemleak_test_init Some very simple testing. This function needs to be extended for proper testing.
insert_zspage Each size class maintains various freelists and zspages are assigned to one of these freelists based on the number of live objects they have. This function inserts the given zspage into the freelist identified by.
ss_add
do_msgsnd
do_msgrcv
unmerge_queues unmerge queues, if possible.
wake_up_sem_queue_prepare Prepare wake-up
SYSC_semtimedop
msg_insert Auxiliary functions to manipulate messages' list
wq_add Adds current to info->e_wait_q[sr] before element with smaller prio
elv_dispatch_add_tail Insert rq into dispatch queue of q. Queue lock must be held on entry. rq is added to the back of the dispatch queue. To be used by specific elevators.
__elv_add_request
elv_register
blk_queue_bio
blk_flush_queue_rq
blk_insert_flush insert a new FLUSH/FUA request
trigger_softirq
__blk_complete_request
blk_add_timer Start timeout timer for a single request
blk_iopoll_sched Schedule a run of the iopoll handler
blk_mq_add_to_requeue_list
__blk_mq_insert_req_list
blk_mq_flush_plug_list
blk_mq_make_request Multiple hardware queue variant. This will not use per-process plugs, but will attempt to bypass the hctx queueing if we can go straight to hardware for SYNC IO.
blk_sq_make_request Single hardware queue variant. This will attempt to use any per-process plug for merging and IO deferral.
blk_mq_init_rq_map
blk_mq_add_queue_tag_set
blk_mq_init_allocated_queue
blk_mq_register_cpu_notifier
disk_add_events
ldm_ldmdb_add Adds a raw VBLK entry to the ldmdb database
ldm_frag_add Add a VBLK fragment to a list
bsg_add_command do final setup of a 'bc' and submit the matching 'rq' to the block layer for io
blkcg_css_alloc
throtl_qnode_add_bio add a bio to a throtl_qnode and activate it
noop_add_request
deadline_add_request add rq to rbtree and fifo
cfq_insert_request
aa_unpack unpack packed binary profile(s) data loaded from user space
ima_init_policy initialize the default measure rules.
ima_parse_add_rule add a rule to ima_policy_rules
sget find or create a superblock
__register_binfmt
attach_mnt vfsmount lock must be held for write
attach_shadowed
commit_tree vfsmount lock must be held for write
vfs_kern_mount
clone_mnt
umount_tree mount_lock must be held; namespace_sem must be held for write
copy_tree
mnt_set_expiry Put a mount on an expiration list
copy_mnt_ns
dcache_dir_lseek
wb_queue_work
fsnotify_add_event Add an event to the group notification queue. The group can later pull this event off the queue to deal with.
ep_scan_ready_list Scans the ready list in a way that makes it possible for the scan code to call f_op->poll(). Also allows for O(NumReady) performance.
ep_poll_callbackThis is the callback that is passed to the wait queue wakeup mechanism. It is called by the stored file descriptors when they have events to report.
ep_ptable_queue_procThis is the callback that is used to add our wait queue to the target file wakeup lists.
ep_insert Must be called with "mtx" held.
ep_modify Modify the interest event mask by dropping an event if the new mask has a match in the current file status. Must be called with "mtx" held.
ep_send_events_proc
__locks_insert_block Insert waiter into blocker's block list. We use a circular list so that processes can be easily woken up in the order they blocked. The documentation doesn't require this but it seems like the reasonable thing to do.
locks_insert_lock_ctx
__mb_cache_entry_release
mb_cache_shrink_scan memory pressure callback
mb_cache_shrink Removes all cache entries of a device from the cache. All cache entries currently in use cannot be freed, and thus remain in the cache. All others are freed.
put_dquot_last Add a dquot to the tail of the free list
put_inuse
kclist_add
get_sparsemem_vmemmap_info calculate vmemmap's address from given system ram pfn and register it
kclist_add_private
process_ptload_program_headers_elf64 Add memory chunks represented by program headers to vmcore list. Also update the new offset fields of exported program headers.
process_ptload_program_headers_elf32
kernfs_get_open_node get or create kernfs_open_node
link_obj
configfs_dir_lseek