Function report
Source Code: include/linux/list.h
Create Date: 2022-07-28 05:34:28
Last Modified: 2020-03-12 14:18:49
Copyright © Brick
Name: list_add_tail - add a new entry
@new: new entry to be added
@head: list head to add it before

Insert a new entry before the specified head. This is useful for implementing queues.
Proto: static inline void list_add_tail(struct list_head *new, struct list_head *head)
Type: void
Parameters:

Type | Parameter |
---|---|
struct list_head * | new |
struct list_head * | head |
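The page does not reproduce the function body, so here is a minimal userspace sketch of the mechanics. Because the list is circular, the node before @head is the last node, so inserting before @head is a tail insertion. The real kernel version additionally runs CONFIG_DEBUG_LIST sanity checks in __list_add(), omitted here.

```c
#include <stddef.h>

/* Userspace model of the kernel's circular doubly linked list node. */
struct list_head {
	struct list_head *prev, *next;
};

/* Splice @new in between two adjacent nodes @prev and @next. */
static inline void __list_add(struct list_head *new,
			      struct list_head *prev,
			      struct list_head *next)
{
	next->prev = new;
	new->next = next;
	new->prev = prev;
	prev->next = new;
}

/* Insert @new before @head, i.e. at the tail of the list. */
static inline void list_add_tail(struct list_head *new, struct list_head *head)
{
	__list_add(new, head->prev, head);
}
```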
Callers:

Name | Description |
---|---|
plist_add | plist_add - add @node to @head. @node: &struct plist_node pointer; @head: &struct plist_head pointer |
plist_requeue | plist_requeue - Requeue @node at end of same-prio entries |
uevent_net_init | |
kobj_kset_join | add the kobject to its kset's list |
add_tail | |
klist_add_before | klist_add_before - Init a klist_node and add it before an existing node. @n: node we're adding; @pos: node to put @n before |
list_sort_test | |
register_test_dev_kmod | |
kunit_alloc_and_get_resource | |
string_stream_vadd | |
populate_error_injection_list | Lookup and populate the error_injection_list. For safety reasons we only allow certain functions to be overridden with bpf_error_injection, so we need to populate the list of the symbols that have been marked as safe for overriding. |
ddebug_add_module | Allocate a new ddebug_table for the given module and add it to the global list. |
irq_poll_sched | irq_poll_sched - Schedule a run of the iopoll handler. @iop: The parent iopoll structure. Description: Add this irq_poll structure to the pending poll list and trigger the raise of the blk iopoll softirq. |
parman_lsort_item_add | |
parman_prio_init | parman_prio_init - initializes a parman priority chunk. @parman: parman instance; @prio: parman prio structure to be initialized; @priority: desired priority of the chunk. Note: all locking must be provided by the caller |
list_test_list_add_tail | |
list_test_list_del | |
list_test_list_replace | |
list_test_list_replace_init | |
list_test_list_swap | |
list_test_list_del_init | |
list_test_list_move | |
list_test_list_move_tail | |
list_test_list_bulk_move_tail | |
list_test_list_is_first | |
list_test_list_is_last | |
list_test_list_empty | |
list_test_list_empty_careful | |
list_test_list_rotate_left | |
list_test_list_rotate_to_front | |
list_test_list_is_singular | |
list_test_list_cut_position | |
list_test_list_cut_before | |
list_test_list_splice | |
list_test_list_splice_tail | |
list_test_list_splice_init | |
list_test_list_splice_tail_init | |
list_test_list_first_entry | |
list_test_list_last_entry | |
list_test_list_first_entry_or_null | |
list_test_list_next_entry | |
list_test_list_prev_entry | |
list_test_list_for_each | |
list_test_list_for_each_prev | |
list_test_list_for_each_safe | |
list_test_list_for_each_prev_safe | |
list_test_list_for_each_entry | |
list_test_list_for_each_entry_reverse | |
save_microcode_patch | |
update_cache | |
domain_add_cpu | domain_add_cpu - Add a cpu to a resource's domain list. If an existing domain in the resource r's domain list matches the cpu's resource id, add the cpu to the domain. Otherwise, a new domain is allocated and inserted into the right position |
rdtgroup_mkdir_mon | Create a monitor group under the "mon_groups" directory of a control and monitor group (ctrl_mon). This is a resource group to monitor a subset of tasks and cpus in its parent ctrl_mon group. |
__check_limbo | Check the RMIDs that are marked as busy for this domain. If the reported LLC occupancy is below the threshold, clear the busy bit and decrement the count. If the busy count gets to zero on an RMID, we free the RMID |
add_rmid_to_limbo | |
free_rmid | |
dom_data_init | |
l3_mon_evt_init | Initialize the event list for the resource. Note that MBM events are also part of the RDT_RESOURCE_L3 resource because, as per the SDM, the total and local memory bandwidth are enumerated as part of L3 monitoring. |
__add_pin_to_irq_node | The common case is 1:1 IRQ<->pin mappings. Sometimes there are shared ISA-space IRQs, so we have to support them. We are super fast in the common case, and fast for shared ISA-space IRQs. |
copy_process | Create a new process |
__send_signal | |
send_sigqueue | |
insert_work | insert_work - insert a work into a pool. @pwq: pwq @work belongs to; @work: work to insert; @head: insertion point; @extra_flags: extra WORK_STRUCT_* flags to set. Insert @work which belongs to @pwq after @head |
worker_attach_to_pool | worker_attach_to_pool() - attach a worker to a pool. @worker: worker to be attached; @pool: the target pool. Attach @worker to @pool. Once attached, the %WORKER_UNBOUND flag and cpu-binding of @worker are kept coordinated with the pool across |
send_mayday | |
rescuer_thread | rescuer_thread - the rescuer thread function. @__rescuer: self. Workqueue rescuer thread function |
flush_workqueue | flush_workqueue - ensure that any scheduled work has run to completion. @wq: workqueue to flush. This function sleeps until all work items which were queued on entry have finished execution, but it is not livelocked by new incoming ones. |
workqueue_apply_unbound_cpumask | |
__kthread_create_on_node | |
kthread_insert_work | Insert @work before @pos in @worker |
async_schedule_node_domain | async_schedule_node_domain - NUMA specific version of async_schedule_domain. @func: function to execute asynchronously; @data: data pointer to pass to the function; @node: NUMA node that we want to schedule this on or close to; @domain: the domain |
__enqueue_rt_entity | |
__wake_up_common | The core wakeup function |
__prepare_to_swait | |
__mutex_add_waiter | Add @waiter to a given location in the lock wait_list and set the FLAG_WAITERS flag if it's the first waiter. |
__down_common | Because this function is inlined, the 'state' parameter will be constant, and thus optimised away by the compiler. Likewise the 'timeout' parameter for the cases without timeouts. |
rwsem_down_read_slowpath | Wait for the read lock to be granted |
rwsem_down_write_slowpath | Wait until we successfully acquire the write lock |
init_data_structures_once | Initialize the lock_classes[] array elements, the free_lock_classes list and also the delayed_free structure. |
pm_qos_update_flags | pm_qos_update_flags - Update a set of PM QoS flags |
alloc_rtree_node | alloc_rtree_node - Allocate a new node and add it to the radix tree. This function is used to allocate inner nodes as well as the leaf nodes of the radix tree. It also adds the node to the corresponding linked list passed in by the *list parameter. |
create_mem_extents | create_mem_extents - Create a list of memory extents. @list: List to put the extents into; @gfp_mask: Mask to use for memory allocations. The extents represent contiguous ranges of PFNs. |
memory_bm_create | memory_bm_create - Allocate memory for a memory bitmap. |
__register_nosave_region | register_nosave_region - Register a region of unsaveable memory. Register a range of page frames the contents of which should not be saved during hibernation (to be used in the early initialization code). |
__irq_alloc_domain_generic_chips | __irq_alloc_domain_generic_chips - Allocate generic chips for an irq domain. @d: irq domain for which to allocate chips; @irqs_per_chip: Number of interrupts each chip handles (max 32); @num_ct: Number of irq_chip_type instances associated with this; @name: |
irq_setup_generic_chip | irq_setup_generic_chip - Setup a range of interrupts with a generic chip. @gc: Generic irq chip holding all data; @msk: Bitmask holding the irqs to initialize relative to gc->irq_base; @flags: Flags for initialization; @clr: IRQ_* bits to clear; @set: IRQ_* bits |
rcu_torture_free | Free an element to the rcu_tortures pool. |
rcu_torture_init | |
klp_init_func_early | |
klp_init_object_early | |
klp_init_patch | |
hash_bucket_add | Add an entry to a hash bucket |
dma_debug_create_entries | |
collect_timerqueue | |
css_set_move_task | css_set_move_task - move a task from one css_set to another. @task: task being moved; @from_cset: css_set @task currently belongs to (may be NULL); @to_cset: new css_set @task is being moved to (may be NULL); @use_mg_tasks: move to @to_cset->mg_tasks instead |
link_css_set | link_css_set - a helper function to link a css_set to a cgroup. @tmp_links: cgrp_cset_link objects allocated by allocate_cgrp_cset_links(); @cset: the css_set to be linked; @cgrp: the destination cgroup |
find_css_set | find_css_set - return a new css_set with one cgroup updated. @old_cset: the baseline css_set; @cgrp: the cgroup to be updated. Return a new css_set that's equivalent to @old_cset, but with @cgrp substituted into the appropriate hierarchy. |
cgroup_migrate_add_task | cgroup_migrate_add_task - add a migration target task to a migration context. @task: target task; @mgctx: target migration context. Add @task, which is a migration target, to @mgctx->tset. This function becomes a noop if @task doesn't need to be migrated |
cgroup_migrate_add_src | cgroup_migrate_add_src - add a migration source css_set. @src_cset: the source css_set to add; @dst_cgrp: the destination cgroup; @mgctx: migration context. Tasks belonging to @src_cset are about to be migrated to @dst_cgrp |
cgroup_migrate_prepare_dst | cgroup_migrate_prepare_dst - prepare destination css_sets for migration. @mgctx: migration context. Tasks are about to be moved and all the source css_sets have been preloaded to @mgctx->preloaded_src_csets |
cgroup_add_cftypes | cgroup_add_cftypes - add an array of cftypes to a subsystem. @ss: target cgroup subsystem; @cfts: zero-length name terminated array of cftypes. Register @cfts to @ss |
cgroup_init | cgroup_init - cgroup initialization. Register cgroup filesystem and /proc file, and initialize any subsystems that didn't request early init. |
cgroup_exit | cgroup_exit - detach cgroup from exiting task. @tsk: pointer to task_struct of exiting process. Description: Detach cgroup from @tsk. |
get_cg_rpool_locked | |
rdmacg_register_device | rdmacg_register_device - register rdmacg device to rdma controller |
__cpu_stop_queue_work | |
audit_add_rule | Add rule to given filterlist if not a duplicate. |
audit_alloc_name | |
llvm_gcov_init | |
llvm_gcda_emit_function | |
gcov_info_link | gcov_info_link - link/add profiling data set to the list. @info: profiling data set |
gcov_info_dup | gcov_info_dup - duplicate profiling data set. @info: profiling data set to duplicate. Return newly allocated duplicate on success, %NULL on error. |
kprobe_add_ksym_blacklist | |
fei_write | |
tracepoint_module_coming | |
tracing_log_err | tracing_log_err - write an error to the tracing error log. @tr: The associated trace array for the error (NULL for top level array); @loc: A string describing where the error occurred; @cmd: The tracing command that caused the error; @errs: The array of |
register_trace_event | register_trace_event - register output for an event type. @event: the event type to register. Event types are stored in a hash and this hash is used to find a way to print an event |
register_stat_tracer | |
hold_module_trace_bprintk_format | |
process_system_preds | |
dyn_event_register | |
trace_probe_append | |
prog_array_map_poke_track | |
xsk_map_sock_add | |
bpf_prog_offload_init | |
bpf_map_offload_map_alloc | |
__cgroup_bpf_attach | __cgroup_bpf_attach() - Attach the program to a cgroup, and propagate the change to descendants. @cgrp: The cgroup which descendants to traverse; @prog: A program to attach; @type: Type of attach operation; @flags: Option flags |
perf_group_attach | |
perf_group_detach | |
pinned_sched_in | |
flexible_sched_in | |
perf_addr_filter_new | Allocate a new address filter |
inherit_event | Inherit an event from parent task to child task. Returns: valid pointer on success; NULL for orphaned events; IS_ERR() on error |
toggle_bp_slot | Add/remove the given breakpoint in our constraint table |
padata_do_parallel | padata_do_parallel - padata parallelization function. @ps: padata shell; @padata: object to be parallelized; @cb_cpu: pointer to the CPU that the serialization callback function should run on. If it's not in the serial cpumask of @pinst (i |
padata_reorder | |
lru_add_page_tail | Used by __split_huge_page_refcount() |
register_shrinker_prepared | |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
shutdown_cache | |
isolate_freepages_block | Isolate free pages onto a private freelist. If @strict is true, will abort returning 0 on any invalid PFNs or non-free pages inside of the pageblock (even though it may still end up isolating some pages). |
list_lru_add | |
check_and_migrate_cma_pages | |
purge_fragmented_blocks | |
free_pcppages_bulk | Frees a number of pages from the PCP lists. Assumes all pages on list are in same zone, and of same order. count is the number of pages to free. If the zone was previously in an "all pages pinned" state then look to |
rmqueue_bulk | Obtain a specified number of elements from the buddy allocator, all under a single hold of the lock, for efficiency. Add them to the supplied list. Returns the number of new pages which were placed at *list. |
add_swap_count_continuation | add_swap_count_continuation - called when a swap count is duplicated beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's page of the original vmalloc'ed swap_map, to hold the continuation count |
migrate_page_add | |
__ksm_enter | |
cache_grow_end | |
get_valid_first_slab | Try to find non-pfmemalloc slab if needed |
free_block | Caller needs to acquire the correct kmem_cache_node's list_lock. @list: List of detached free slabs, to be freed by the caller |
__add_partial | Management of partially allocated slabs. |
add_page_for_migration | Resolves the given address to a struct page, isolates it from the LRU and puts it on the given pagelist |
deferred_split_huge_page | |
__khugepaged_enter | |
add_to_kill | Schedule a process for later kill. Uses GFP_ATOMIC allocations to avoid potential recursions in the VM. |
update_refs | Update an object's references. object->lock must be held by the caller. |
kmemleak_scan | Scan data sections and all the referenced memory blocks allocated via the kernel's standard allocators. This function must be called with the scan_mutex held. |
kmemleak_test_init | Some very simple testing. This function needs to be extended for proper testing. |
ss_add | |
do_msgsnd | |
do_msgrcv | |
unmerge_queues | unmerge_queues - unmerge queues, if possible. @sma: semaphore array. The function unmerges the wait queues if complex_count is 0. It must be called prior to dropping the global semaphore array lock. |
do_semtimedop | |
msg_insert | Auxiliary functions to manipulate messages' list |
mqueue_evict_inode | |
wq_add | Adds current to info->e_wait_q[sr] before element with smaller prio |
elv_register | |
__blk_complete_request | |
blk_mq_add_to_requeue_list | |
__blk_mq_insert_req_list | |
blk_mq_request_bypass_insert | Should only be used carefully, when the caller knows we want to bypass a potential IO scheduler on the target device. |
blk_mq_flush_plug_list | |
blk_add_rq_to_plug | |
blk_mq_alloc_rqs | |
disk_add_events | |
ldm_ldmdb_add | ldm_ldmdb_add - Adds a raw VBLK entry to the ldmdb database. @data: Raw VBLK to add to the database; @len: Size of the raw VBLK; @ldb: Cache of the database structures. The VBLKs are sorted into categories. Partitions are also sorted by offset. N |
ldm_frag_add | ldm_frag_add - Add a VBLK fragment to a list. @data: Raw fragment to be added to the list; @size: Size of the raw fragment; @frags: Linked list of VBLK fragments. Fragmented VBLKs may not be consecutive in the database, so they are placed |
blkcg_css_alloc | |
throtl_qnode_add_bio | throtl_qnode_add_bio - add a bio to a throtl_qnode and activate it. @bio: bio being added; @qn: qnode to add bio to; @queued: the service_queue->queued[] list @qn belongs to. Add @bio to @qn and put @qn on @queued if it's not already on. |
dd_insert_request | add rq to rbtree and fifo |
__bfq_insert_request | Returns true if it causes the idle timer to be disabled |
bfq_insert_request | |
add_suspend_info | |
key_garbage_collector | Reaper for unused keys. |
key_init | Initialise the key management state. |
keyring_publish_name | Publish the name of a keyring so that it can be found by name (if it has* one and it doesn't begin with a dot). |
tomoyo_write_log2 | tomoyo_write_log2 - Write an audit log. @r: Pointer to "struct tomoyo_request_info"; @len: Buffer size needed for @fmt and @args; @fmt: The printf()'s format string; @args: va_list structure for @fmt. Returns nothing. |
tomoyo_supervisor | tomoyo_supervisor - Ask for the supervisor's decision |
tomoyo_get_name | tomoyo_get_name - Allocate permanent memory for string data. @name: The string to store into the permanent memory. Returns pointer to "struct tomoyo_path_info" on success, NULL otherwise. |
aa_unpack | aa_unpack - unpack packed binary profile(s) data loaded from user space. @udata: user data copied to kmem (NOT NULL); @lh: list to place unpacked profiles in a aa_repl_ws; @ns: Returns namespace profile is in if specified else NULL (NOT NULL). Unpack user |
dev_exceptions_copy | Called under devcgroup_mutex |
add_rules | |
ima_parse_add_rule | ima_parse_add_rule - add a rule to ima_policy_rules. @rule: ima measurement policy rule. Avoid locking by allowing just one writer at a time in ima_write_policy(). Returns the length of the rule parsed, an error code on failure |
evm_init_config | |
sget_fc | sget_fc - Find or create a superblock. @fc: Filesystem context |
sget | find or create a superblock |
__register_binfmt | |
__attach_mnt | |
commit_tree | vfsmount lock must be held for write |
vfs_create_mount | vfs_create_mount - Create a mount for a configured superblock. @fc: The configuration context with the superblock attached. Create a mount to an already configured superblock. If necessary, the caller should invoke vfs_get_tree() before calling this. |
clone_mnt | |
umount_tree | mount_lock must be held; namespace_sem must be held for write |
copy_tree | |
open_detached_copy | |
mnt_set_expiry | mnt_set_expiry - Put a mount on an expiration list. @mnt: The mount to list; @expiry_list: The list to add the mount to. |
copy_mnt_ns | |
wb_queue_work | |
sb_mark_inode_writeback | mark an inode as under writeback on the sb |
propagate_umount | Collect all mounts that receive propagation from the mount in @list, and return these additional mounts in the same list. @list: the list of mounts to be unmounted. vfsmount lock must be held for write |
fsnotify_add_event | Add an event to the group notification queue |
fanotify_read | |
ep_ptable_queue_proc | |
ep_insert | Must be called with "mtx" held. |
ep_modify | Modify the interest event mask by dropping an event if the new mask has a match in the current file status. Must be called with "mtx" held. |
ep_send_events_proc | |
dup_userfaultfd | |
userfaultfd_unmap_prep | |
kiocb_set_cancel_fn | |
aio_poll | |
io_cqring_fill_event | |
io_iopoll_req_issued | After the iocb has been issued, it's safe to be found on the poll list. Adding the kiocb to the list AFTER submission ensures that we don't find it from an io_iopoll_getevents() thread before the issuer is done accessing the kiocb cookie. |
io_req_defer | |
io_submit_sqe | |
__locks_insert_block | Insert waiter into blocker's block list. We use a circular list so that processes can be easily woken up in the order they blocked. The documentation doesn't require this but it seems like the reasonable thing to do. |
locks_insert_lock_ctx | |
mb_cache_entry_create | mb_cache_entry_create - create entry in cache. @cache: cache where the entry should be created; @mask: gfp mask with which the entry should be allocated; @key: key of the entry; @value: value of the entry; @reusable: is the entry reusable by others? |
put_dquot_last | Add a dquot to the tail of the free list |
put_inuse | |
list_move_tail | list_move_tail - delete from one list and add as another's tail. @list: the entry to move; @head: the head that will follow our entry |
__add_wait_queue_entry_tail | |
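Nearly all of the callers above follow the FIFO idiom the header comment describes: embed a struct list_head in an object, enqueue at the tail with list_add_tail(), and dequeue from the front. Below is a self-contained userspace sketch of that pattern; the struct waiter type and its field names are illustrative only, not taken from any of the callers.

```c
#include <stdio.h>
#include <stddef.h>

struct list_head {
	struct list_head *prev, *next;
};

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Insert @entry before @head, i.e. at the tail of the circular list. */
static inline void list_add_tail(struct list_head *entry, struct list_head *head)
{
	struct list_head *prev = head->prev;

	prev->next = entry;
	entry->prev = prev;
	entry->next = head;
	head->prev = entry;
}

/* Unlink @entry by making its neighbours point at each other. */
static inline void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

static inline int list_empty(const struct list_head *head)
{
	return head->next == head;
}

/* Illustrative item type, standing in for the work items, requests and
 * wait-queue entries queued by the callers listed above. */
struct waiter {
	int id;
	struct list_head link;
};

int main(void)
{
	struct list_head queue = LIST_HEAD_INIT(queue);
	struct waiter w[3] = { { .id = 1 }, { .id = 2 }, { .id = 3 } };

	/* Enqueue in arrival order; list_add_tail() preserves FIFO order. */
	for (int i = 0; i < 3; i++)
		list_add_tail(&w[i].link, &queue);

	/* Dequeue from the front: waiters come out as 1, 2, 3. */
	while (!list_empty(&queue)) {
		struct waiter *first = container_of(queue.next, struct waiter, link);

		list_del(&first->link);
		printf("woke waiter %d\n", first->id);
	}
	return 0;
}
```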