Function Logic Report |
Source Code: include/linux/spinlock.h |
Create Date: 2022-07-27 06:39:17 |
Last Modify: 2020-03-12 14:18:49 | Copyright©Brick |
Function name: spin_lock (acquire a spinlock)
Prototype: static __always_inline void spin_lock(spinlock_t *lock)
Return type: void
Parameters:
Type | Name |
---|---|
spinlock_t * | lock |
338 | raw_spin_lock(&lock->rlock) |
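For reference, the definition this report describes is the thin inline wrapper below, reconstructed from the prototype and the raw_spin_lock() call recorded at source line 338: it simply forwards to the raw spinlock embedded in spinlock_t.

```c
static __always_inline void spin_lock(spinlock_t *lock)
{
	raw_spin_lock(&lock->rlock);
}
```

The table that follows lists callers of spin_lock throughout the kernel tree, each with the leading lines of its documentation comment where one exists.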
Name | Description |
---|---|
_atomic_dec_and_lock | This is an implementation of the notion of "decrement a* reference count, and return locked if it decremented to zero" |
kobj_kset_join | add the kobject to its kset's list |
kobj_kset_leave | move the kobject from its kset's list |
kset_find_obj | kset_find_obj() - Search for object in kset.*@kset: kset we're looking in.*@name: object's name.* Lock kset via @kset->subsys, and iterate over @kset->list,* looking for a matching kobject. If matching object is found |
kobj_ns_type_register | |
kobj_ns_type_registered | |
kobj_ns_current_may_mount | |
kobj_ns_grab_current | |
kobj_ns_netlink | |
kobj_ns_initial | |
kobj_ns_drop | |
add_head | |
add_tail | |
klist_add_behind | Add and initialize a klist_node after an existing node |
klist_add_before | Add and initialize a klist_node before an existing node |
klist_release | |
klist_put | |
klist_remove | Decrement the node's reference count and wait for it to be removed |
lockref_get | Unconditionally increment the reference count |
lockref_get_not_zero | Increment the count unless it is 0 or dead |
lockref_put_not_zero | lockref_put_not_zero - Decrements count unless count <= 1 before decrement*@lockref: pointer to lockref structure* Return: 1 if count updated successfully or 0 if count would become zero |
lockref_get_or_lock | Increment the count unless it is 0 or dead, otherwise take the lock |
lockref_put_or_lock | Decrement the count if it is greater than 0, otherwise take the lock |
lockref_get_not_dead | Increment the count if a reference is still held |
rhashtable_rehash_table | |
rhashtable_walk_enter | rhashtable_walk_enter - Initialise an iterator*@ht: Table to walk over*@iter: Hash table Iterator* This function prepares a hash table walk.* Note that if you restart a walk after rhashtable_walk_stop you* may see the same object twice |
rhashtable_walk_exit | Free the iterator |
rhashtable_walk_start_check | rhashtable_walk_start_check - Start a hash table walk*@iter: Hash table iterator* Start a hash table walk at the current iterator position. Note that we take* the RCU lock in all cases including when we return an error. So you must |
rhashtable_walk_stop | Finish walking the hash table |
refcount_dec_and_lock | refcount_dec_and_lock - return holding spinlock if able to decrement* refcount to 0*@r: the refcount*@lock: the spinlock to be locked* Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to* decrement when saturated at REFCOUNT_SATURATED |
kunit_alloc_and_get_resource | |
kunit_resource_remove | |
kunit_cleanup | |
string_stream_vadd | |
string_stream_clear | |
string_stream_get_string | |
gen_pool_add_owner | gen_pool_add_owner- add a new chunk of special memory to the pool*@pool: pool to add new memory chunk to*@virt: virtual starting address of memory chunk to add to pool*@phys: physical starting address of memory chunk to add to pool*@size: size in bytes of |
textsearch_register | textsearch_register - register a textsearch module*@ops: operations lookup table* This function must be called by textsearch modules to announce* their presence |
textsearch_unregister | textsearch_unregister - unregister a textsearch module*@ops: operations lookup table* This function must be called by textsearch modules to announce* their disappearance for examples when the module gets unloaded |
machine_real_restart | |
queue_event | |
suspend | |
do_release | |
do_open | |
__mmput | |
copy_fs | Copy the filesystem structure (fs_struct) |
copy_process | Create a new process |
ksys_unshare | unshare allows a process to 'unshare' part of the process* context which was originally shared using clone. copy_** functions used by do_fork() cannot be used here directly* because they modify an inactive task_struct that is being* constructed |
do_oops_enter_exit | It just happens that oops_enter() and oops_exit() are identically* implemented... |
__exit_signal | This function expects the tasklist_lock write-locked. |
free_resource | |
alloc_resource | |
__ptrace_unlink | __ptrace_unlink - unlink ptracee and restore its execution state*@child: ptracee to be unlinked* Remove @child from the ptrace list, move it back to the original parent,* and restore the execution state so that it conforms to the group stop* state |
ptrace_attach | |
ignoring_children | Called with irqs disabled, returns true if childs should reap themselves. |
prctl_set_mm | |
call_usermodehelper_exec_async | This is the task which runs the usermode application |
proc_cap_handler | |
try_to_grab_pending | try_to_grab_pending - steal work item from worklist and disable irq*@work: work item to steal*@is_dwork: @work is a delayed_work*@flags: place to store irq state* Try to grab PENDING bit of @work. This function can handle @work in any |
__queue_work | |
pool_mayday_timeout | |
rescuer_thread | rescuer_thread - the rescuer thread function*@__rescuer: self* Workqueue rescuer thread function |
kmalloc_parameter | |
maybe_kfree_parameter | Does nothing if parameter wasn't kmalloced above. |
__kthread_create_on_node | |
kthreadd | |
__cond_resched_lock | __cond_resched_lock() - if a reschedule is pending, drop the given lock,* call schedule, and on return reacquire the lock |
do_wait_intr | Note! These two wait functions are entered with the* wait-queue lock held (and interrupts off in the _irq* case), so there is no race with testing the wakeup* condition in the caller before they add the wait* entry to the wake queue. |
ww_mutex_set_context_fastpath | After acquiring lock with fastpath, where we do not hold wait_lock, set ctx* and wake up any waiters so they can recheck. |
__mutex_lock_common | Lock a mutex (possibly interruptible), slowpath: |
__mutex_unlock_slowpath | |
torture_spin_lock_write_lock | |
kcmp_epoll_target | |
posix_timer_add | |
SYSCALL_DEFINE1 | Delete a POSIX.1b interval timer. |
run_posix_cpu_timers | This is called from the timer interrupt handler. The irq handler has* already updated our counts. We need to check if any timers fire now.* Interrupts are disabled. |
double_lock_hb | Express the locking dependencies for lockdep: |
futex_wake | Wake up waiters matching bitset queued on this futex (uaddr). |
queue_lock | The key must be already stored in q->key. |
unqueue_me | unqueue_me() - Remove the futex_q from its futex_hash_bucket*@q: The futex_q to unqueue* The q->lock_ptr must not be held by the caller |
fixup_pi_state_owner | |
futex_lock_pi | Userspace tried a 0 -> TID atomic transition of the futex value* and failed. The kernel side here does the whole locking operation:* if there are waiters then it will block as a consequence of relying* on rt-mutexes, it does PI, etc |
futex_unlock_pi | Userspace attempted a TID -> 0 atomic transition, and failed.* This is the in-kernel slowpath: we look up the PI state (if any),* and do the rt-mutex unlock. |
futex_wait_requeue_pi | futex_wait_requeue_pi() - Wait on uaddr and take uaddr2*@uaddr: the futex we initially wait on (non-pi)*@flags: futex flags (FLAGS_SHARED, FLAGS_CLOCKRT, etc |
cgroup_post_fork | cgroup_post_fork - called on a new task after adding it to the task list*@child: the task in question* Adds the task to the list running through its css_set if necessary and* call the subsystem fork() callbacks |
cgroup_release_agent_write | |
cgroup_release_agent_show | |
cgroup1_show_options | |
cgroup1_reconfigure | |
cgroup_leave_frozen | Conditionally leave frozen/stopped state |
fmeter_markevent | Process any previous ticks, then bump cnt by one (times scale). |
fmeter_getrate | Process any previous ticks, then return current value. |
untag_chunk | |
create_chunk | Call with group->mark_mutex held, releases it |
tag_chunk | The first tagged inode becomes root of the tree |
prune_tree_chunks | Remove tree from chunks. If 'tagged' is set, remove tree only from tagged* chunks. The function expects tagged chunks are all at the beginning of the* chunks list. |
trim_marked | Trim the uncommitted chunks from the tree |
audit_remove_tree_rule | Called with audit_filter_mutex |
audit_trim_trees | |
audit_add_tree_rule | Called with audit_filter_mutex |
audit_tag_tree | |
evict_chunk | Here comes the stuff asynchronous to auditctl operations |
audit_tree_freeing_mark | |
kcov_remote_reset | |
kcov_task_exit | |
kcov_mmap | |
kcov_ioctl_locked | |
kcov_ioctl | |
kcov_remote_start | kcov_remote_start() and kcov_remote_stop() can be used to annotate a section* of code in a kernel background thread to allow kcov to be used to collect* coverage from that part of code |
kcov_remote_stop | See the comment before kcov_remote_start() for usage details. |
kgdb_register_io_module | kgdb_register_io_module - register KGDB IO module*@new_dbg_io_ops: the io ops vector* Register it with the KGDB core. |
kgdb_unregister_io_module | kgdb_unregister_io_module - unregister KGDB IO module*@old_dbg_io_ops: the io ops vector* Unregister it with the KGDB core. |
remove_event_file_dir | |
bpf_task_fd_query | |
dev_map_alloc | |
dev_map_free | |
bq_flush_to_queue | |
find_uprobe | Find a uprobe corresponding to a given inode:offset* Acquires uprobes_treelock |
insert_uprobe | Acquire uprobes_treelock.* Matching uprobe already exists in rbtree;* increment (access refcount) and return the matching uprobe.* No matching uprobe; insert the uprobe in rb_tree;* get a double refcount (access + creation) and return NULL. |
delete_uprobe | There could be threads that have already hit the breakpoint. They* will recheck the current insn and restart if find_uprobe() fails.* See find_active_uprobe(). |
build_probe_list | For a given range in vma, build a list of probes that need to be inserted. |
vma_has_uprobes | |
padata_parallel_worker | |
padata_do_parallel | padata_do_parallel - padata parallelization function*@ps: padatashell*@padata: object to be parallelized*@cb_cpu: pointer to the CPU that the serialization callback function should* run on. If it's not in the serial cpumask of @pinst* (i |
padata_find_next | padata_find_next - Find the next object that needs serialization |
padata_reorder | |
padata_serial_worker | |
padata_do_serial | padata_do_serial - padata serialization function*@padata: object to be serialized.* padata_do_serial must be called for every parallelized object.* The serialization callback function will run with BHs off. |
file_check_and_advance_wb_err | file_check_and_advance_wb_err - report wb error (if any) that was previously* and advance wb_err to current one*@file: struct file on which the error is being reported* When userland calls fsync (or something like nfsd does the equivalent), we* want to |
oom_reaper | |
wake_oom_reaper | |
generic_fadvise | POSIX_FADV_WILLNEED could set PG_Referenced, and POSIX_FADV_NOREUSE could* deactivate the pages and clear PG_Referenced. |
domain_update_bandwidth | |
balance_dirty_pages | balance_dirty_pages() must be called by processes which are generating dirty* data |
get_cmdline | get_cmdline() - copy the cmdline value to a buffer.*@task: the task whose cmdline value to copy.*@buffer: the buffer to copy to.*@buflen: the length of the buffer. Larger cmdline values are truncated* to this length. |
list_lru_add | |
list_lru_del | |
list_lru_walk_one | |
list_lru_walk_node | |
__pte_alloc_kernel | |
copy_one_pte | Copy one vm_area from one task to the other. Assumes the page tables* already present in the new task to be cleared in the whole range* covered by this vma. |
do_numa_page | |
handle_pte_fault | These routines also need to handle stuff like marking pages dirty* and/or accessed for architectures that don't do it in hardware (most* RISC architectures) |
user_shm_lock | |
user_shm_unlock | |
expand_downwards | vma is the first one with address < vma->vm_start. Have to extend vma. |
map_pte | |
page_vma_mapped_walk | page_vma_mapped_walk - check if @pvmw->page is mapped in @pvmw->vma at*@pvmw->address*@pvmw: pointer to struct page_vma_mapped_walk. page, vma, address and flags* must be set. pmd, pte and ptl must be NULL.* Returns true if the page is mapped in the vma |
__anon_vma_prepare | __anon_vma_prepare - attach an anon_vma to a memory region*@vma: the memory region in question* This makes sure the memory mapping described by 'vma' has* an 'anon_vma' attached to it, so that we can associate the |
try_to_unmap_one | @arg: enum ttu_flags will be passed to this argument |
free_vmap_area | Free a region of KVA allocated by alloc_vmap_area |
alloc_vmap_area | Allocate a region of KVA of the specified size and alignment, within the* vstart and vend. |
__purge_vmap_area_lazy | Purges all lazily-freed vmap areas. |
free_vmap_area_noflush | Free a vmap area, caller ensuring that the area has been unmapped* and flush_cache_vunmap had been called for the correct range* previously. |
find_vmap_area | |
new_vmap_block | new_vmap_block - allocates new vmap_block and occupies 2^order pages in this* block |
free_vmap_block | |
purge_fragmented_blocks | |
vb_alloc | |
vb_free | |
_vm_unmap_aliases | |
setup_vmalloc_vm | |
remove_vm_area | remove_vm_area - find and remove a continuous kernel virtual area*@addr: base address* Search for the kernel VM area starting at @addr, and remove it.* This function returns the found VM area, but using it is NOT safe |
vread | vread() - read vmalloc area in a safe way.*@buf: buffer for reading data*@addr: vm address.*@count: number of bytes to be read.* This function checks that addr is a valid vmalloc'ed area, and* copy data from that area to a given buffer |
vwrite | vwrite() - write vmalloc area in a safe way.*@buf: buffer for source data*@addr: vm address.*@count: number of bytes to be read.* This function checks that addr is a valid vmalloc'ed area, and* copy data from a buffer to the given addr |
s_start | |
free_pcppages_bulk | Frees a number of pages from the PCP lists* Assumes all pages on list are in same zone, and of same order.* count is the number of pages to free.* If the zone was previously in an "all pages pinned" state then look to |
free_one_page | |
rmqueue_bulk | Obtain a specified number of elements from the buddy allocator, all under* a single hold of the lock, for efficiency. Add them to the supplied list.* Returns the number of new pages which were placed at *list. |
__build_all_zonelists | |
setup_per_zone_wmarks | setup_per_zone_wmarks - called when min_free_kbytes changes* or when memory is hot-{added|removed}* Ensures that the watermark[min,low,high] values for each zone are set* correctly with respect to min_free_kbytes. |
lock_cluster | |
lock_cluster_or_swap_info | Determine the locking method in use for this device. Return* swap_cluster_info if SSD-style cluster-based locking is in place. |
swap_do_scheduled_discard | Doing discard actually. After a cluster discard is finished, the cluster* will be added to free cluster list. caller should hold si->lock. |
swap_discard_work | |
del_from_avail_list | |
add_to_avail_list | |
scan_swap_map_slots | |
get_swap_pages | |
get_swap_page_of_type | The only caller of this function is now suspend routine |
swap_info_get | |
swap_info_get_cont | |
put_swap_page | Called after dropping swapcache to decrease refcnt to swap entries. |
try_to_unuse | If the boolean frontswap is true, only unuse pages_to_unuse pages;* pages_to_unuse==0 means all pages; ignored if frontswap is false |
drain_mmlist | After a successful try_to_unuse, if no swap is now in use, we know* we can empty the mmlist. swap_lock must be held on entry and exit.* Note that mmlist_lock nests inside swap_lock, and an mm must be |
enable_swap_info | |
reinsert_swap_info | |
has_usable_swap | |
SYSCALL_DEFINE1 | |
alloc_swap_info | |
SYSCALL_DEFINE2 | |
si_swapinfo | |
add_swap_count_continuation | add_swap_count_continuation - called when a swap count is duplicated* beyond SWAP_MAP_MAX, it allocates a new page and links that to the entry's* page of the original vmalloc'ed swap_map, to hold the continuation count |
swap_count_continued | swap_count_continued - when the original swap_map count is incremented* from SWAP_MAP_MAX, check if there is already a continuation page to carry* into, carry if so, or else fail until a new continuation page is allocated;* when the original swap_map |
mem_cgroup_throttle_swaprate | |
frontswap_register_ops | Register operations for frontswap |
frontswap_shrink | Frontswap, like a true swap device, may unnecessarily retain pages* under certain circumstances; "shrink" frontswap is essentially a* "partial swapoff" and works by calling try_to_unuse to attempt to* unuse enough frontswap pages to attempt to -- subject |
frontswap_curr_pages | Count and return the number of frontswap pages across all* swap devices. This is exported so that backend drivers can* determine current usage without reading debugfs. |
__zswap_pool_empty | |
__zswap_param_set | val must be a null-terminated string |
zswap_writeback_entry | |
zswap_frontswap_store | attempts to compress and store a single page |
zswap_frontswap_load | returns 0 if the page was successfully decompressed* return -1 on entry not found or error |
zswap_frontswap_invalidate_page | frees an entry in zswap |
zswap_frontswap_invalidate_area | frees all zswap entries for the given swap type |
hugepage_put_subpool | |
hugepage_subpool_get_pages | Subpool accounting for allocating and reserving pages |
hugepage_subpool_put_pages | Subpool accounting for freeing and unreserving pages.* Return the number of global page reservations that must be dropped.* The return value may only be different than the passed value (delta)* in the case where a subpool minimum size must be maintained. |
region_add | Add the huge page range represented by [f, t) to the reserve* map |
region_chg | Examine the existing reserve map and determine how many* huge pages in the specified range [f, t) are NOT currently* represented. This routine is called before a subsequent* call to region_add that will actually modify the reserve |
region_abort | Abort the in progress add operation. The adds_in_progress field* of the resv_map keeps track of the operations in progress between* calls to region_chg and region_add. Operations are sometimes* aborted after the call to region_chg |
region_del | Delete the specified range [f, t) from the reserve map. If the* t parameter is LONG_MAX, this indicates that ALL regions after f* should be deleted. Locate the regions which intersect [f, t)* and either trim, delete or split the existing regions. |
region_count | Count and return the number of huge pages in the reserve map* that intersect with the range [f, t). |
__free_huge_page | |
prep_new_huge_page | |
dissolve_free_huge_page | Dissolve a given free hugepage into free buddy pages. This function does* nothing for in-use hugepages and non-hugepages.* This function returns values like below:* -EBUSY: failed to dissolved free hugepages or the hugepage is in-use |
alloc_surplus_huge_page | Allocates a fresh surplus page from the page allocator. |
alloc_huge_page_node | page migration callback function |
alloc_huge_page_nodemask | page migration callback function |
gather_surplus_pages | Increase the hugetlb pool such that it can accommodate a reservation* of size 'delta'. |
alloc_huge_page | |
set_max_huge_pages | |
nr_overcommit_hugepages_store | |
hugetlb_overcommit_handler | |
hugetlb_acct_memory | Forward declaration |
hugetlb_cow | Hugetlb_cow() should be called with page lock of the original hugepage held.* Called with hugetlb_instantiation_mutex held and pte_page locked so we* cannot race with other handlers or page migration. |
huge_add_to_page_cache | |
hugetlb_mcopy_atomic_pte | Used by userfaultfd UFFDIO_COPY. Based on mcopy_atomic_pte with* modifications for huge pages. |
hugetlb_unreserve_pages | |
follow_huge_pmd | |
isolate_huge_page | |
putback_active_hugepage | |
move_hugetlb_state | |
mn_itree_inv_start_range | |
mn_itree_inv_end | |
mmu_interval_read_begin | mmu_interval_read_begin - Begin a read side critical section against a VA* range* mmu_iterval_read_begin()/mmu_iterval_read_retry() implement a* collision-retry scheme similar to seqcount for the VA range under mni |
mn_hlist_release | This function can't run concurrently against mmu_notifier_register* because mm->mm_users > 0 during mmu_notifier_register and exit_mmap* runs with mm_users == 0 |
__mmu_notifier_register | Same as mmu_notifier_register but here the caller must hold the mmap_sem in* write mode. A NULL mn signals the notifier is being registered for itree* mode. |
find_get_mmu_notifier | |
mmu_notifier_unregister | This releases the mm_count pin automatically and frees the mm* structure if it was the last user of it. It serializes against* running mmu notifiers with SRCU and against mmu_notifier_unregister* with the unregister lock + SRCU |
mmu_notifier_put | mmu_notifier_put - Release the reference on the notifier*@mn: The notifier to act on* This function must be paired with each mmu_notifier_get(), it releases the* reference obtained by the get. If this is the last reference then process |
__mmu_interval_notifier_insert | |
mmu_interval_notifier_remove | mmu_interval_notifier_remove - Remove a interval notifier*@mni: Interval notifier to unregister* This function must be paired with mmu_interval_notifier_insert() |
scan_get_next_rmap_item | |
__ksm_enter | |
__ksm_exit | |
cache_free_pfmemalloc | |
__drain_alien_cache | |
__cache_free_alien | |
do_drain | |
cache_grow_end | |
cache_alloc_pfmemalloc | |
cache_alloc_refill | |
____cache_alloc_node | An interface to enable slab creation on nodeid |
cache_flusharray | |
get_partial_node | Try to allocate a partial slab from a specific node. |
deactivate_slab | Remove the cpu slab |
__migration_entry_wait | Something used the pte of a page under migration. We need to* get to the page and wait until migration is finished.* When we return from this function the fault will be retried. |
__buffer_migrate_page | |
do_huge_pmd_wp_page | |
do_huge_pmd_numa_page | NUMA hinting page fault entry point for trans huge pmds |
split_huge_page_to_list | This function splits huge page into normal pages. @page can point to any* subpage of huge page to split. Split doesn't change the position of @page.* Only caller must hold pin on the @page, otherwise split fails with -EBUSY.* The huge page must be locked. |
__khugepaged_enter | |
__khugepaged_exit | |
__collapse_huge_page_copy | |
collapse_huge_page | |
khugepaged_scan_mm_slot | |
khugepaged_do_scan | |
khugepaged | |
mem_cgroup_under_move | A routine for checking "mem" is under move_account() or not.* Checking a cgroup is mc.from or mc.to or under hierarchy of* moving cgroups. This is for waiting at high-memory pressure* caused by "move". |
mem_cgroup_oom_trylock | Check OOM-Killer is already running under our hierarchy.* If someone is running, return false. |
mem_cgroup_oom_unlock | |
mem_cgroup_mark_under_oom | |
mem_cgroup_unmark_under_oom | |
mem_cgroup_oom_notify_cb | |
mem_cgroup_oom_register_event | |
mem_cgroup_oom_unregister_event | |
memcg_event_wake | Gets called on EPOLLHUP on eventfd when user closes it.* Called with wqh->lock held and interrupts disabled. |
memcg_write_event_control | DO NOT USE IN NEW FILES.* Parse input and register new cgroup event handler.* Input must be in format ' |
mem_cgroup_css_offline | |
mem_cgroup_clear_mc | |
mem_cgroup_can_attach | |
vmpressure_work_fn | |
vmpressure | vmpressure() - Account memory pressure through scanned/reclaimed ratio*@gfp: reclaimer's gfp mask*@memcg: cgroup memory controller handle*@tree: legacy subtree mode*@scanned: number of pages scanned*@reclaimed: number of pages reclaimed* This function |
hugetlb_cgroup_css_offline | Force the hugetlb cgroup to empty the hugetlb resources by moving them to* the parent cgroup. |
hugetlb_cgroup_migrate | hugetlb_lock will make sure a parallel cgroup rmdir won't happen* when we migrate hugepages |
zpool_register_driver | zpool_register_driver() - register a zpool implementation.*@driver: driver to register |
zpool_unregister_driver | zpool_unregister_driver() - unregister a zpool implementation |
zpool_get_driver | This assumes @type is null-terminated. |
zpool_create_pool | zpool_create_pool() - Create a new zpool*@type: The type of the zpool to create (e.g. zbud, zsmalloc)*@name: The name of the zpool (e.g. zram0, zswap)*@gfp: The GFP flags to use when allocating the pool.*@ops: The optional ops callback. |
zpool_destroy_pool | zpool_destroy_pool() - Destroy a zpool*@zpool: The zpool to destroy.* Implementations must guarantee this to be thread-safe,* however only when destroying different pools. The same* pool should only be destroyed once, and should not be used |
zbud_alloc | zbud_alloc() - allocates a region of a given size*@pool: zbud pool from which to allocate*@size: size in bytes of the desired allocation*@gfp: gfp flags used if the pool needs to grow*@handle: handle of the new allocation* This function will attempt to |
zbud_free | zbud_free() - frees the allocation associated with the given handle*@pool: pool in which the allocation resided*@handle: handle associated with the allocation returned by zbud_alloc()* In the case that the zbud page in which the allocation resides is |
zbud_reclaim_page | zbud_reclaim_page() - evicts allocations from a pool page and frees it*@pool: pool from which a page will attempt to be evicted*@retries: number of pages on the LRU list for which eviction will* be attempted before failing* zbud reclaim is different from |
zs_malloc | zs_malloc - Allocate block of given size from pool.*@pool: pool to allocate from*@size: size of block to allocate*@gfp: gfp flags when allocating object* On success, handle to the allocated object is returned,* otherwise 0. |
zs_free | |
__zs_compact | |
z3fold_page_lock | Lock a z3fold page |
__release_z3fold_page | |
release_z3fold_page_locked_list | |
free_pages_work | |
add_to_unbuddied | Add to the appropriate unbuddied list |
do_compact_page | |
__z3fold_alloc | Returns a _locked_ z3fold page header or NULL |
z3fold_alloc | z3fold_alloc() - allocates a region of a given size*@pool: z3fold pool from which to allocate*@size: size in bytes of the desired allocation*@gfp: gfp flags used if the pool needs to grow*@handle: handle of the new allocation* This function will attempt |
z3fold_free | z3fold_free() - frees the allocation associated with the given handle*@pool: pool in which the allocation resided*@handle: handle associated with the allocation returned by z3fold_alloc()* In the case that the z3fold page in which the allocation resides |
z3fold_reclaim_page | z3fold_reclaim_page() - evicts allocations from a pool page and frees it*@pool: pool from which a page will attempt to be evicted*@retries: number of pages on the LRU list for which eviction will* be attempted before failing* z3fold reclaim is different |
z3fold_page_isolate | |
z3fold_page_migrate | |
z3fold_page_putback | |
cma_add_to_cma_mem_list | |
cma_get_entry_from_list | |
ipc_addid | ipc_addid - add an ipc identifier*@ids: ipc identifier set*@new: new ipc permission set*@limit: limit for the number of used ids* Add an entry 'new' to the ipc ids idr |
complexmode_enter | Enter the mode suitable for non-simple operations:* Caller must own sem_perm.lock. |
sem_lock | If the request contains only one semaphore operation, and there are* no complex transactions pending, lock only the semaphore involved |
freeary | Free a semaphore set. freeary() is called with sem_ids.rwsem locked* as a writer and the spinlock for this semaphore set hold. sem_ids.rwsem* remains locked on exit. |
find_alloc_undo | find_alloc_undo - lookup (and if not present create) undo array*@ns: namespace*@semid: semaphore array id* The function looks up (and if not present creates) the undo structure.* The size of the undo structure depends on the size of the semaphore |
exit_sem | add semadj values to semaphores, free undo structures |
get_ns_from_inode | |
mqueue_get_inode | |
mqueue_evict_inode | |
mqueue_create_attr | |
mqueue_read_file | This is routine for system read from queue file |
mqueue_flush_file | |
mqueue_poll_file | |
wq_sleep | Puts current task to sleep. Caller must hold queue lock. After return* lock isn't held. |
do_mq_timedsend | |
do_mq_timedreceive | |
do_mq_notify | Notes: the case when user wants us to deregister (with NULL as pointer)* and he isn't currently owner of notification, will be silently discarded.* It isn't explicitly defined in the POSIX. |
do_mq_getsetattr | |
bio_alloc_rescue | |
punt_bios_to_rescuer | |
elevator_get | |
elv_register | |
elv_unregister | |
elevator_get_by_features | Get the first elevator providing the features required by the request queue.* Default to "none" if no matching elevator is found. |
elv_iosched_show | |
ioc_create_icq | ioc_create_icq - create and link io_cq*@ioc: io_context of interest*@q: request_queue of interest*@gfp_mask: allocation mask* Make sure io_cq linking @ioc and @q exists |
flush_busy_ctx | |
dispatch_rq_from_ctx | |
blk_mq_dispatch_wake | |
blk_mq_mark_tag_wait | Mark us waiting for a tag. For shared tags, this involves hooking us into* the tag wakeups. For non-shared tags, we can simply mark us needing a* restart. For both cases, take care to check the condition again after* marking us as waiting. |
blk_mq_dispatch_rq_list | Returns true if we did some work AND can potentially do more. |
blk_mq_request_bypass_insert | Should only be used carefully, when the caller knows we want to* bypass a potential IO scheduler on the target device. |
blk_mq_insert_requests | |
blk_mq_hctx_notify_dead | 'cpu' is going away. splice any existing rq_list entries from this* software queue to the hw queue dispatch list, and ensure that it* gets run. |
blk_mq_exit_hctx | |
blk_mq_alloc_and_init_hctx | |
blk_stat_add_callback | |
blk_stat_remove_callback | |
blk_stat_enable_accounting | |
blk_mq_sched_dispatch_requests | |
__blk_mq_sched_bio_merge | |
blk_mq_sched_bypass_insert | |
blk_mq_sched_insert_request | |
blkg_create | If @new_blkg is %NULL, this function tries to allocate a new one as* necessary using %GFP_NOWAIT. @new_blkg is always consumed on return. |
blkg_destroy_all | blkg_destroy_all - destroy all blkgs associated with a request_queue*@q: request_queue of interest* Destroy all blkgs associated with @q. |
iolatency_clear_scaling | |
ioc_timer_fn | |
ioc_pd_free | |
ioc_weight_write | |
dd_dispatch_request | One confusing aspect here is that we get called for a specific* hardware queue, but we may return a request that is for a* different hardware queue. This is because mq-deadline has shared* state for all hardware queues, in terms of sorting, FIFOs, etc. |
dd_bio_merge | |
dd_insert_requests | |
kyber_bio_merge | |
kyber_insert_requests | |
flush_busy_kcq | |
kyber_dispatch_request | |
hctx_dispatch_start | |
ctx_default_rq_list_start | |
ctx_read_rq_list_start | |
ctx_poll_rq_list_start | |
key_gc_unused_keys | Garbage collect a list of unreferenced, detached keys |
key_garbage_collector | Reaper for unused keys. |
key_user_lookup | Get the key quota record for a user, allocating a new record if one doesn't* already exist. |
key_alloc_serial | Allocate a serial number for a key. These are assigned randomly to avoid* security issues through covert channel problems. |
key_alloc | key_alloc - Allocate a key of the specified type.*@type: The type of key to allocate.*@desc: The key description to allow the key to be searched out.*@uid: The owner of the new key.*@gid: The group ID for the new key's group permissions. |
key_payload_reserve | key_payload_reserve - Adjust data quota reservation for the key's payload*@key: The key to make the reservation for |
key_lookup | Find a key by its serial number. |
keyctl_chown_key | Change the ownership of a key* The key must grant the caller Setattr permission for this to work, though* the key need not be fully instantiated yet. For the UID to be changed, or* for the GID to be changed to a group the caller is not a member of, the |
proc_keys_start | |
proc_key_users_start | |
inode_free_security | |
sb_finish_set_opts | |
inode_doinit_with_dentry | |
flush_unauthorized_files | Derived from fs/exec.c:flush_old_files. |
selinux_inode_post_setxattr | |
selinux_inode_setsecurity | |
selinux_task_to_inode | |
selinux_socket_accept | |
selinux_inode_invalidate_secctx | |
tomoyo_write_log2 | tomoyo_write_log2 - Write an audit log.*@r: Pointer to "struct tomoyo_request_info".*@len: Buffer size needed for @fmt and @args.*@fmt: The printf()'s format string.*@args: va_list structure for @fmt.* Returns nothing. |
tomoyo_read_log | tomoyo_read_log - Read an audit log.*@head: Pointer to "struct tomoyo_io_buffer".* Returns nothing. |
tomoyo_write_profile | tomoyo_write_profile - Write profile table.*@head: Pointer to "struct tomoyo_io_buffer".* Returns 0 on success, negative value otherwise. |
tomoyo_supervisor | tomoyo_supervisor - Ask for the supervisor's decision |
tomoyo_find_domain_by_qid | |
tomoyo_read_query | tomoyo_read_query - Read access requests which violated policy in enforcing mode.*@head: Pointer to "struct tomoyo_io_buffer". |
tomoyo_write_answer | tomoyo_write_answer - Write the supervisor's decision.*@head: Pointer to "struct tomoyo_io_buffer".* Returns 0 on success, -EINVAL otherwise. |
tomoyo_struct_used_by_io_buffer | tomoyo_struct_used_by_io_buffer - Check whether the list element is used by /sys/kernel/security/tomoyo/ users or not.*@element: Pointer to "struct list_head".* Returns true if @element is used by /sys/kernel/security/tomoyo/ users,* false otherwise. |
tomoyo_name_used_by_io_buffer | tomoyo_name_used_by_io_buffer - Check whether the string is used by /sys/kernel/security/tomoyo/ users or not.*@string: String to check.* Returns true if @string is used by /sys/kernel/security/tomoyo/ users,* false otherwise. |
tomoyo_gc_thread | tomoyo_gc_thread - Garbage collector thread function.*@unused: Unused.* Returns 0. |
tomoyo_notify_gc | tomoyo_notify_gc - Register/unregister /sys/kernel/security/tomoyo/ users.*@head: Pointer to "struct tomoyo_io_buffer".*@is_register: True if register, false if unregister.* Returns nothing. |
multi_transaction_set | does not increment @new's count |
multi_transaction_read | |
aa_get_buffer | |
aa_put_buffer | |
destroy_buffers | |
update_file_ctx | |
revalidate_tty | |
yama_relation_cleanup | |
yama_ptracer_add | yama_ptracer_add - add/replace an exception for this tracer/tracee pair*@tracer: the task_struct of the process doing the ptrace*@tracee: the task_struct of the process to be ptraced* Each tracee can have, at most, one tracer registered. Each time this |
loadpin_read_file | |
ima_init_template_list | |
restore_template_fmt | |
generic_file_llseek_size | generic_file_llseek_size - generic llseek implementation for regular files*@file: file structure to seek on*@offset: file offset to seek to*@whence: type of seek*@size: max size of this file in file system*@eof: offset used for SEEK_END position* This is |
put_super | put_super - drop a temporary reference to superblock*@sb: superblock in question* Drops a temporary reference, frees superblock if there's no* references left. |
generic_shutdown_super | generic_shutdown_super - common helper for ->kill_sb()*@sb: superblock to kill* generic_shutdown_super() does all fs-independent work on superblock* shutdown |
sget_fc | sget_fc - Find or create a superblock*@fc: Filesystem context |
sget | Find or create a superblock |
__iterate_supers | |
iterate_supers | iterate_supers - call function for all active superblocks*@f: function to call*@arg: argument to pass to it* Scans the superblock list and calls given function, passing it* locked superblock and given argument. |
iterate_supers_type | iterate_supers_type - call function for superblocks of given type*@type: fs type*@f: function to call*@arg: argument to pass to it* Scans the superblock list and calls given function, passing it* locked superblock and given argument. |
__get_super | |
get_active_super | get_active_super - get an active reference to the superblock of a device*@bdev: device to get the superblock for* Scans the superblock list and finds the superblock of the file system* mounted on the device given. Returns the superblock with an active |
user_get_super | |
chrdev_open | Called every time a character special file is opened |
cd_forget | |
cdev_purge | |
inode_add_bytes | |
inode_sub_bytes | |
inode_get_bytes | |
de_thread | This function makes sure the current process has its own signal table,* so that flush_signal_handlers can later reset the handlers without* disturbing other processes. (Other processes might share the signal* table via the CLONE_SIGHAND option to clone().) |
check_unsafe_exec | determine how safe it is to execute the proposed program* - the caller must hold ->cred_guard_mutex to protect against* PTRACE_ATTACH or seccomp thread-sync |
put_pipe_info | |
fifo_open | |
do_inode_permission | We _really_ want to just do "generic_permission()" without* even looking at the inode->i_op values. So we keep a cache* flag in inode->i_opflags, that says "this has not special* permission function, use the fast case". |
vfs_tmpfile | |
vfs_link | vfs_link - create a new link*@old_dentry: object to be linked*@dir: new parent*@new_dentry: where to create the new link*@delegated_inode: returns inode needing a delegation break* The caller must hold dir->i_mutex* If vfs_link discovers a delegation on |
vfs_readlink | vfs_readlink - copy symlink body into userspace buffer*@dentry: dentry on which to get symbolic link*@buffer: user memory pointer*@buflen: size of buffer* Does not touch atime. That's up to the caller if necessary* Does not call security hook. |
setfl | |
fcntl_rw_hint | |
fasync_remove_entry | Remove a fasync entry. If successfully removed, return* positive and clear the FASYNC flag. If no entry exists,* do nothing and return 0.* NOTE! It is very important that the FASYNC flag always* match the state "is the filp on a fasync list". |
fasync_insert_entry | Insert a new entry into the fasync list. Return the pointer to the* old one if we didn't use the new one.* NOTE! It is very important that the FASYNC flag always* match the state "is the filp on a fasync list". |
ioctl_fionbio | |
take_dentry_name_snapshot | |
d_drop | |
__dentry_kill | |
__lock_parent | |
dentry_kill | Finish off a dentry we've decided to kill.* Returns dentry requiring refcount drop, or NULL if we're done. |
fast_dput | Try to do a lockless dput(), and return whether that was successful |
dget_parent | |
d_find_any_alias | d_find_any_alias - find any alias for a given inode*@inode: inode to find an alias for* If any aliases exist for the given inode, take and return a* reference for one of them. If no aliases exist, return %NULL. |
__d_find_alias | __d_find_alias - grab a hashed alias of inode*@inode: inode in question* If inode has a hashed alias, or is a directory and has any alias,* acquire the reference to alias and return it |
d_find_alias | Get a hashed alias of the inode |
d_prune_aliases | Try to kill dentries associated with this inode.* WARNING: you must own a reference to inode. |
shrink_lock_dentry | Lock a dentry from shrink list |
shrink_dentry_list | |
d_walk | d_walk - walk the dentry tree*@parent: start of walk*@data: data passed to @enter() and @finish()*@enter: callback when first entering the dentry* The @enter() callbacks are called with d_lock held. |
d_set_mounted | Called by mount code to set a mountpoint and check if the mountpoint is* reachable (e.g. NFS can unhash a directory dentry and then the complete* subtree can become unreachable).* Only one of d_invalidate() and d_set_mounted() must succeed. For |
shrink_dcache_parent | Shrink (prune) the dentry cache for a parent |
d_invalidate | Invalidate a dentry |
d_alloc | Allocate a dentry (dcache entry) |
d_set_fallthru | d_set_fallthru - Mark a dentry as falling through to a lower layer*@dentry - The dentry to mark* Mark a dentry as falling through to the lower layer (as set with* d_pin_lower()). This flag may be recorded on the medium. |
__d_instantiate | |
d_instantiate | Fill in inode information for a dentry |
d_instantiate_new | This should be equivalent to d_instantiate() + unlock_new_inode(),* with lockdep-related part of unlock_new_inode() done before* anything else. Use that instead of open-coding d_instantiate()/* unlock_new_inode() combinations. |
__d_instantiate_anon | |
__d_lookup | __d_lookup - search for a dentry (racy)*@parent: parent dentry*@name: qstr of name we wish to find* Returns: dentry, or NULL* __d_lookup is like d_lookup, however it may (rarely) return a* false-negative result due to unrelated rename activity |
d_delete | Delete a dentry |
d_rehash | Add a dentry back to the hash table |
d_wait_lookup | |
d_alloc_parallel | |
__d_add | |
d_add | Add a dentry to the hash queues |
d_exact_alias | d_exact_alias - find and hash an exact unhashed alias*@entry: dentry to add*@inode: The inode to go with this dentry* If an unhashed dentry with the same name/parent and desired* inode already exists, hash and return it. Otherwise, return* NULL. |
__d_move | __d_move - move a dentry*@dentry: entry to move*@target: new dentry*@exchange: exchange the two dentries* Update the dcache to reflect the move of a file name |
d_splice_alias | Link a dentry (splice a disconnected dentry into the tree) |
d_tmpfile | |
inode_sb_list_add | inode_sb_list_add - add inode to the superblock list of inodes*@inode: inode to add |
inode_sb_list_del | |
__insert_inode_hash | Insert an inode into the hash table |
__remove_inode_hash | __remove_inode_hash - remove an inode from the hash*@inode: inode to unhash* Remove an inode from the superblock. |
evict | Free the inode passed in, removing it from the lists it is still connected* to |
evict_inodes | evict_inodes - evict all evictable inodes for a superblock*@sb: superblock to operate on* Make sure that no inodes with zero refcount are retained |
invalidate_inodes | invalidate_inodes - attempt to free all inodes on a superblock*@sb: superblock to operate on*@kill_dirty: flag to guide handling of dirty inodes* Attempts to free all inodes for a given superblock. If there were any |
inode_lru_isolate | Isolate the inode from the LRU in preparation for freeing it |
find_inode | Called with the inode lock held. |
find_inode_fast | find_inode_fast is the fast path version of find_inode, see the comment at* iget_locked for details. |
new_inode_pseudo | new_inode_pseudo - obtain an inode*@sb: superblock* Allocates a new inode for given superblock.* Inode won't be chained in superblock s_inodes list* This means :* - fs can't be unmount* - quotas, fsnotify, writeback can't work |
unlock_new_inode | unlock_new_inode - clear the I_NEW state and wake up any waiters*@inode: new inode to unlock* Called when the inode is fully initialised to clear the new state of the* inode and wake up anyone waiting for the inode to finish initialisation. |
discard_new_inode | |
inode_insert5 | inode_insert5 - obtain an inode from a mounted file system*@inode: pre-allocated inode to use for insert to cache*@hashval: hash value (usually inode number) to get*@test: callback used for comparisons between inodes*@set: callback used to initialize a new |
iget_locked | Obtain an inode from a mounted filesystem |
test_inode_iunique | search the inode cache for a matching inode number.* If we find one, then the inode number we are trying to* allocate is not unique and so we should not use it.* Returns 1 if the inode number is unique, 0 if it is not. |
iunique | Get a unique inode number |
igrab | |
ilookup5_nowait | ilookup5_nowait - search for an inode in the inode cache*@sb: super block of file system to search*@hashval: hash value (usually inode number) to search for*@test: callback used for comparisons between inodes*@data: opaque data pointer to pass to @test |
ilookup | Search for an inode in the inode cache |
find_inode_nowait | find_inode_nowait - find an inode in the inode cache*@sb: super block of file system to search*@hashval: hash value (usually inode number) to search for*@match: callback used for comparisons between inodes*@data: opaque data pointer to pass to @match* Search |
insert_inode_locked | |
iput_final | Called when we're dropping the last reference* to an inode |
__wait_on_freeing_inode | |
expand_fdtable | Expand the file descriptor table.* This function will allocate a new fdtable and both fd array and fdset, of* the given size.* Return <0 error code on error; 1 on successful completion. |
expand_files | Expand files.* This function will expand the file structures, if the requested size exceeds* the current capacity and there is room for expansion.* Return <0 error code on error; 0 when nothing done; 1 when files were |
dup_fd | Allocate a new files structure and copy contents from the* passed in files structure.* errorp will be valid only when the returned files_struct is NULL. |
__alloc_fd | allocate a file descriptor, mark it busy. |
put_unused_fd | |
__fd_install | Install a file pointer in the fd array.* The VFS is full of places where we drop the files lock between* setting the open_fds bitmap and installing the file in the file* array. At any such point, we are vulnerable to a dup2() race |
__close_fd | The same warnings as for __alloc_fd()/__fd_install() apply here... |
__close_fd_get_file | variant of __close_fd that gets a ref on the file for later fput |
do_close_on_exec | |
set_close_on_exec | We only lock f_pos if we have threads or if the file might be* shared with another process. In both cases we'll have an elevated* file count (done either by fdget() or by fork()). |
replace_fd | |
ksys_dup3 | |
iterate_fd | |
__put_mountpoint | vfsmount lock must be held. Additionally, the caller is responsible* for serializing calls for given disposal list. |
simple_xattr_get | xattr GET operation for in-memory/pseudo filesystems |
simple_xattr_set | simple_xattr_set - xattr SET operation for in-memory/pseudo filesystems*@xattrs: target simple_xattr list*@name: name of the extended attribute*@value: value of the xattr |
simple_xattr_list | xattr LIST operation for in-memory/pseudo filesystems |
simple_xattr_list_add | Adds an extended attribute to the list |
scan_positives | Returns an element of siblings' list.* We are looking for the next positive dentry after the cursor; if* found, dentry is grabbed and returned to caller.* If no such element exists, NULL is returned. |
dcache_dir_lseek | |
dcache_readdir | Directory is locked and all positive dentries in it are safe, since* for ramfs-type trees they can't go away without unlink() or rmdir(),* both impossible due to the lock on directory. |
simple_empty | |
simple_pin_fs | |
simple_release_fs | |
simple_transaction_get | |
locked_inode_to_wb_and_lock_list | |
inode_to_wb_and_lock_list | |
__inode_wait_for_writeback | Wait for writeback on an inode to complete. Called with i_lock held.* Caller must make sure inode cannot go away when we drop i_lock. |
inode_wait_for_writeback | Wait for writeback on an inode to complete. Caller must have inode pinned. |
__writeback_single_inode | Write out an inode and its dirty pages. Do not update the writeback list* linkage. That is left to the caller. The caller is also responsible for* setting I_SYNC flag and calling inode_sync_complete() to clear it. |
writeback_single_inode | Write out an inode's dirty pages. Either the caller has an active reference* on the inode or the inode has I_WILL_FREE set.* This function is designed to be called for writing back one inode which* we go e |
writeback_sb_inodes | Write a portion of b_io inodes which belong to @sb.* Return the number of pages and/or inodes written.* NOTE! This is called with wb->list_lock held, and will* unlock and relock that for each inode it ends up doing* IO for. |
writeback_inodes_wb | |
wb_writeback | Explicit flushing or periodic writeback of "old" data |
block_dump___mark_inode_dirty | |
__mark_inode_dirty | __mark_inode_dirty - internal function*@inode: inode to mark*@flags: what kind of dirty (i |
wait_sb_inodes | The @s_sync_lock is used to serialise concurrent sync operations* to avoid lock contention problems with concurrent wait_sb_inodes() calls.* Concurrent callers will block on the s_sync_lock rather than doing contending* walks |
fsstack_copy_inode_size | does _NOT_ require i_mutex to be held.* This function cannot be inlined since i_size_{read,write} is rather* heavy-weight on 32-bit systems |
set_fs_root | Replace the fs->{rootmnt,root} with {mnt,dentry}. Put the old values.* It can block. |
set_fs_pwd | Replace the fs->{pwdmnt,pwd} with {mnt,dentry}. Put the old values.* It can block. |
chroot_fs_refs | |
exit_fs | |
copy_fs_struct | |
unshare_fs_struct | |
pin_remove | |
pin_insert | |
__find_get_block_slow | Various filesystems appear to want __find_get_block to be non-blocking |
osync_buffers_list | osync is designed to support O_SYNC io |
mark_buffer_dirty_inode | |
__set_page_dirty_buffers | Add a page to the dirty page list.* It is a sad fact of life that this function is called from several places* deeply under spinlocking. It may not sleep.* If the page has buffers, the uptodate buffers are set dirty, to preserve |
fsync_buffers_list | |
invalidate_inode_buffers | Invalidate any and all dirty buffers on a given inode. We are* probably unmounting the fs, but that doesn't mean we have already* done a sync(). Just drop the buffers from the inode list.* NOTE: we take the inode's blockdev's mapping's private_lock. Which |
remove_inode_buffers | Remove any clean buffers from the inode's buffer list. This is called* when we're trying to free the inode itself. Those buffers can pin it.* Returns true if all buffers were removed. |
grow_dev_page | Create the page-cache page that contains the requested block.* This is used purely for blockdev mappings. |
__bforget | bforget() is like brelse(), except it discards any* potentially dirty data. |
create_empty_buffers | We attach and possibly dirty the buffers atomically wrt* __set_page_dirty_buffers() via private_lock. try_to_free_buffers* is already excluded via the page lock. |
attach_nobh_buffers | Attach the singly-linked list of buffers created by nobh_write_begin, to* the page (converting it to circular linked list and taking care of page* dirty races). |
try_to_free_buffers | |
bdev_write_inode | |
bdev_evict_inode | |
bdget | |
nr_blockdev_pages | |
bd_acquire | |
bd_forget | Call when you free inode |
bd_prepare_to_claim | bd_prepare_to_claim - prepare to claim a block device*@bdev: block device of interest*@whole: the whole device containing @bdev, may equal @bdev*@holder: holder trying to claim @bdev* Prepare to claim @bdev |
bd_start_claiming | bd_start_claiming - start claiming a block device*@bdev: block device of interest*@holder: holder trying to claim @bdev*@bdev is about to be opened exclusively |
bd_finish_claiming | bd_finish_claiming - finish claiming of a block device*@bdev: block device of interest*@whole: whole block device (returned from bd_start_claiming())*@holder: holder that has claimed @bdev* Finish exclusive open of a block device |
bd_abort_claiming | bd_abort_claiming - abort claiming of a block device*@bdev: block device of interest*@whole: whole block device (returned from bd_start_claiming())*@holder: holder that has claimed @bdev* Abort claiming of a block device when the exclusive open failed |
blkdev_put | |
iterate_bdevs | |
fsnotify_unmount_inodes | fsnotify_unmount_inodes - an sb is unmounting. handle any watched inodes.*@sb: superblock being unmounted.* Called during unmount with no locks held, so needs to be safe against* concurrent modifiers. We temporarily drop sb->s_inode_list_lock and CAN block. |
__fsnotify_update_child_dentry_flags | Given an inode, first check if we care what happens to our children. Inotify* and dnotify both tell their parents about events. If we care about any event* on a child we run all of our children and set a dentry flag saying that the* parent cares |
fsnotify_destroy_event | |
fsnotify_add_event | Add an event to the group notification queue |
fsnotify_flush_notify | Called when a group is being torn down to clean up any outstanding* event notifications. |
fsnotify_group_stop_queueing | Stop queueing new events for this group. Once this function returns* fsnotify_add_event() will not add any new events to the group's queue. |
fsnotify_recalc_mask | Calculate mask of events for a list of marks. The caller must make sure* connector and connector->obj cannot disappear under us. Callers achieve* this by holding a mark->lock or mark->group->mark_mutex for a mark on this* list. |
fsnotify_connector_destroy_workfn | |
fsnotify_put_mark | |
fsnotify_get_mark_safe | Get mark reference when we found the mark via lockless traversal of object* list. Mark can be already removed from the list by now and on its way to be* destroyed once SRCU period ends.* Also pin the group so it doesn't disappear under us. |
fsnotify_detach_mark | Mark mark as detached, remove it from group list |
fsnotify_free_mark | Free fsnotify mark |
fsnotify_grab_connector | Get mark connector, make sure it is alive and return with its lock held.* This is for users that get connector pointer from inode or mount. Users that* hold reference to a mark on the list may directly lock connector->lock as |
fsnotify_add_mark_list | Add mark into proper place in given list of marks. These marks may be used* for the fsnotify backend to determine which event types should be delivered* to which group and for which inodes. These marks are ordered according to |
fsnotify_add_mark_locked | Attach an initialized mark to a given group and fs object.* These marks may be used for the fsnotify backend to determine which* event types should be delivered to which group. |
fsnotify_destroy_marks | Destroy all marks attached to an object via connector |
fsnotify_mark_destroy_workfn | |
dnotify_handle_event | Main fsnotify call where events are delivered to dnotify.* Find the dnotify mark on the relevant inode, run the list of dnotify structs* on that mark and determine which of them has expressed interest in receiving* events of this type |
dnotify_flush | Called every time a file is closed. Looks first for a dnotify mark on the* inode. If one is found run all of the ->dn structures attached to that* mark for one relevant to this process closing the file and remove that* dnotify_struct |
fcntl_dirnotify | When a process calls fcntl to attach a dnotify watch to a directory it ends* up here. Allocate both a mark for fsnotify to add and a dnotify_struct to be* attached to the fsnotify_mark. |
inotify_poll | inotify userspace file descriptor functions |
inotify_read | |
inotify_ioctl | |
inotify_add_to_idr | |
inotify_idr_find | |
inotify_remove_from_idr | Remove the mark from the idr (if present) and drop the reference* on the mark because it was in the idr. |
inotify_update_existing_watch | |
fanotify_get_response | Wait for response to permission event |
get_one_event | Get an fsnotify notification event if one exists and is small* enough to fit in "count". Return an error pointer if the count* is not large enough. When permission event is dequeued, its state is* updated accordingly. |
process_access_response | |
fanotify_poll | fanotify userspace file descriptor functions |
fanotify_read | |
fanotify_release | |
fanotify_ioctl | |
fanotify_mark_remove_from_mask | |
fanotify_mark_add_to_mask | |
ep_remove | Removes a "struct epitem" from the eventpoll RB tree and deallocates* all the associated resources. Must be called with "mtx" held. |
ep_insert | Must be called with "mtx" held. |
__timerfd_remove_cancel | |
timerfd_remove_cancel | |
timerfd_setup_cancel | |
userfaultfd_ctx_read | |
put_aio_ring_file | |
aio_ring_mremap | |
aio_migratepage | |
ioctx_add_table | |
aio_nr_sub | |
ioctx_alloc | ioctx_alloc - Allocates and initializes an ioctx. Returns an ERR_PTR if it failed. |
kill_ioctx | kill_ioctx* Cancels all outstanding aio requests on an aio context. Used* when the processes owning a context have all exited to encourage* the rapid destruction of the kioctx. |
aio_poll_cancel | assumes we are called with irqs disabled |
aio_poll | |
io_poll_remove_one | |
io_poll_add | |
__fscrypt_prepare_lookup | |
evict_dentries_for_decrypted_inodes | |
check_for_busy_inodes | |
put_crypt_info | |
fscrypt_get_encryption_info | |
find_or_insert_direct_key | Find/insert the given key into the fscrypt_direct_keys table. If found, it* is returned with elevated refcount, and 'to_insert' is freed if non-NULL. If* not found, 'to_insert' is inserted and returned if it's non-NULL; otherwise* NULL is returned. |
locks_move_blocks | |
locks_insert_global_locks | Must be called with the flc_lock held! |
locks_delete_global_locks | Must be called with the flc_lock held! |
locks_delete_block | locks_delete_lock - stop waiting for a file lock*@waiter: the lock which was waiting* lockd/nfsd need to disconnect the lock while working on it. |
locks_insert_block | Must be called with flc_lock held. |
locks_wake_up_blocks | Wake up processes blocked waiting for blocker.* Must be called with the inode->flc_lock held! |
posix_test_lock | |
flock_lock_inode | Try to create a FLOCK lock on filp |
posix_lock_inode | |
__break_lease | Revoke all outstanding leases on the file |
lease_get_mtime | lease_get_mtime - update modified time of an inode with exclusive lease*@inode: the inode*@time: pointer to a timespec which contains the last modified time* This is to force NFS clients to flush their caches for files with* exclusive leases |
fcntl_getlease | Get the current lease on the file |
generic_add_lease | |
generic_delete_lease | |
fcntl_setlk | Apply the lock described by l to an open file descriptor.* This implements both the F_SETLK and F_SETLKW commands of fcntl(). |
fcntl_setlk64 | Apply the lock described by l to an open file descriptor.* This implements both the F_SETLK and F_SETLKW commands of fcntl(). |
locks_remove_lease | The i_flctx must be valid when calling into here |
locks_remove_file | This function is called on the last close of an open file. |
show_fd_locks | |
locks_start | |
mb_cache_entry_create | mb_cache_entry_create - create entry in cache*@cache - cache where the entry should be created*@mask - gfp mask with which the entry should be allocated*@key - key of the entry*@value - value of the entry*@reusable - is the entry reusable by others? |
mb_cache_entry_delete | mb_cache_entry_delete - remove a cache entry*@cache - cache we work with*@key - key*@value - value* Remove entry from cache @cache with key @key and value @value. |
mb_cache_shrink | |
locks_start_grace | locks_start_grace*@net: net namespace that this lock manager belongs to*@lm: who this grace period is for* A grace period is a period during which locks should not be given* out |
locks_end_grace | locks_end_grace*@net: net namespace that this lock manager belongs to*@lm: who this grace period is for* Call this function to state that the given lock manager is ready to* resume regular locking. The grace period will not end until all lock |
drop_pagecache_sb | |
get_vfsmount_from_fd | |
register_quota_format | |
unregister_quota_format | |
find_quota_format | |
dquot_mark_dquot_dirty | Mark dquot dirty in atomic manner, and return it's old dirty flag state |
clear_dquot_dirty | |
mark_info_dirty | |
invalidate_dquots | Invalidate all dquots on the list |
dquot_scan_active | Call callback for every active dquot on given filesystem |
dquot_writeback_dquots | Write all dquot structures to quota files |
dqcache_shrink_scan | |
dqput | Put reference to dquot |
dqget | Get reference to dquot* Locking is slightly tricky here. We are guarded from parallel quotaoff()* destroying our dquot by:* a) checking for quota flags under dq_list_lock and* b) getting a reference to dquot before we release dq_list_lock |
add_dquot_ref | This routine is guarded by s_umount semaphore |
remove_inode_dquot_ref | Remove references to dquots from inode and add dquot to list for freeing* if we have the last reference to dquot |
remove_dquot_ref | |
dquot_add_inodes | |
dquot_add_space | |
__dquot_initialize | |
__dquot_drop | Release all quotas referenced by inode.* This function only be called on inode free or converting* a file to quota file, no other users for the i_dquot in* both cases, so we needn't call synchronize_srcu() after* clearing i_dquot. |
inode_get_rsv_space | |
__dquot_alloc_space | This operation can block, but only after everything is updated |
dquot_alloc_inode | This operation can block, but only after everything is updated |
dquot_claim_space_nodirty | Convert in-memory reserved quotas to real consumed quotas |
dquot_reclaim_space_nodirty | Convert allocated space back to in-memory reserved quotas |
__dquot_free_space | This operation can block, but only after everything is updated |
dquot_free_inode | This operation can block, but only after everything is updated |
__dquot_transfer | Transfer the number of inode and blocks from one diskquota to an other |
dquot_disable | Turn quota off on a device. type == -1 ==> quotaoff for all types (umount) |
dquot_load_quota_sb | |
dquot_resume | Reenable quotas on remount RW |
dquot_quota_enable | |
dquot_quota_disable | |
do_get_dqblk | Generic routine for getting common part of quota structure |
do_set_dqblk | Generic routine for setting common part of quota structure |
dquot_get_state | Generic routine for getting common part of quota file information |
dquot_set_dqinfo | Generic routine for setting common part of quota file information |
v1_write_file_info | |
v2_write_file_info | Write information header to quota file |
qtree_write_dquot | We don't have to be afraid of deadlocks as we never have quotas on quota* files... |
qtree_read_dquot | |
alloc_dcookie | |
free_dcookie | |
write_seqlock | The current CPU is responsible for updating the time |
read_seqlock_excl | A locking reader exclusively locks out other writers and locking readers,* but doesn't update the sequence number. Acts like a normal spin_lock/unlock.* Don't need preempt_disable() because that is in the spin_lock already. |
task_lock | Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring* subscriptions and synchronises with wait4(). Also used in procfs. Also* pins the final release of task.io_context. Also protects ->cpuset and* ->cgroup.subsys[]. And ->vfork_done. |
dont_mount | |
d_lookup_done | |
parent_ino | |
pmd_lock | |
pud_lock | |
huge_pte_lock | |
wb_domain_size_changed | wb_domain_size_changed - memory available to a wb_domain has changed*@dom: wb_domain of interest* This function should be called when the amount of memory available to*@dom has changed |
__netif_tx_lock | |
netif_tx_lock | Grab the network device transmit lock |
netif_addr_lock | |
get_fs_root | |
get_fs_pwd | |
nfs_mark_for_revalidate | |
exp_funnel_lock | Funnel-lock acquisition for expedited grace periods |
rcu_exp_wait_wake | Wait for the current expedited grace period to complete, and then* wake up everyone who piggybacked on the just-completed expedited* grace period. Also update all the ->exp_seq_rq counters as needed* in order to avoid counter-wrap problems. |
ptr_ring_full | |
ptr_ring_produce | Note: resize (below) nests producer lock within consumer lock, so if you* consume in interrupt or BH context, you must disable interrupts/BH when* calling this. |
ptr_ring_empty | |
ptr_ring_consume | Note: resize (below) nests producer lock within consumer lock, so if you* call this in interrupt or BH context, you must disable interrupts/BH when* producing. |
ptr_ring_consume_batched | |
ptr_ring_unconsume | Return entries into ring |
ptr_ring_resize | Note: producer lock is nested within consumer lock, so if you* resize you must make sure all uses nest correctly.* In particular if you consume ring in interrupt or BH context, you must* disable interrupts/BH when doing so. |
ptr_ring_resize_multiple | Note: producer lock is nested within consumer lock, so if you* resize you must make sure all uses nest correctly.* In particular if you consume ring in interrupt or BH context, you must* disable interrupts/BH when doing so. |
ipc_lock_object | |
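To illustrate how the callers in the table above use this primitive, here is a minimal, self-contained usage sketch. The structure and function names (counter_state, counter_init, counter_inc) are hypothetical; only spin_lock_init(), spin_lock() and spin_unlock() are the real API. spin_lock() must always be paired with spin_unlock(), and the protected section must not sleep.

```c
#include <linux/spinlock.h>

/* Hypothetical example structure protected by a spinlock. */
struct counter_state {
	spinlock_t lock;      /* protects 'count' */
	unsigned long count;
};

static void counter_init(struct counter_state *cs)
{
	spin_lock_init(&cs->lock);
	cs->count = 0;
}

static void counter_inc(struct counter_state *cs)
{
	spin_lock(&cs->lock);    /* acquire; spins until the lock is free */
	cs->count++;             /* critical section: must not sleep */
	spin_unlock(&cs->lock);  /* release */
}
```

Callers that may race with interrupt context would use the irq-disabling variants (spin_lock_irqsave()/spin_unlock_irqrestore()) instead; the plain spin_lock() shown here is the function documented in this report.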