Function Report

Linux Kernel (v4.4)

Source File: include/linux/spinlock.h    Create Date: 2016-01-14 09:06:23
Last Modified: 2016-01-11 07:01:32    Copyright © Brick

Function Name:spin_lock

Function: static inline __attribute__((always_inline)) void spin_lock(spinlock_t *lock)

Return Type: void

Parameter:

Type          Parameter Name    Remarks
spinlock_t *  lock

Function description:

Acquires the spinlock. The body is a single call to raw_spin_lock (line 302), which operates on the raw_spinlock_t embedded in the spinlock_t.
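A minimal sketch of the v4.4 definition, for reference. The report above shows the always_inline attribute in expanded form; the rlock member name follows the mainline header and should be checked against the exact tree:

    /* include/linux/spinlock.h (v4.4) -- sketch, not a verbatim copy */
    static __always_inline void spin_lock(spinlock_t *lock)
    {
        raw_spin_lock(&lock->rlock);   /* line 302: delegate to the raw lock */
    }

Because the wrapper is always inlined, the callers in the table below compile down to a direct raw_spin_lock call.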
Caller
Function Name    Function description
write_seqlock Lock out other writers and update the count. Acts like a normal spin_lock/unlock. Don't need preempt_disable() because that is in the spin_lock already.
read_seqlock_excl A locking reader exclusively locks out other writers and locking readers, but doesn't update the sequence number. Acts like a normal spin_lock/unlock. Don't need preempt_disable() because that is in the spin_lock already.
pmd_lock
dont_mount
inode_inc_iversion increments i_version
parent_ino
task_lock Protects ->fs, ->files, ->mm, ->group_info, ->comm, keyring subscriptions and synchronises with wait4(). Also used in procfs. Also pins the final release of task.io_context. Also protects ->cpuset and ->cgroup.subsys[]. And ->vfork_done.
__netif_tx_lock
netif_tx_lock grab network device transmit lock
netif_addr_lock
wb_domain_size_changed memory available to a wb_domain has changed
kobj_kset_join add the kobject to its kset's list
kobj_kset_leave remove the kobject from its kset's list
kset_find_obj search for object in kset.
kobj_ns_type_register
kobj_ns_type_registered
kobj_ns_current_may_mount
kobj_ns_grab_current
kobj_ns_netlink
kobj_ns_initial
kobj_ns_drop
add_head
add_tail
klist_add_behind Init a klist_node and add it after an existing node
klist_add_before Init a klist_node and add it before an existing node
klist_release
klist_put
klist_remove Decrement the refcount of node and wait for it to go away.
klist_prev Ante up prev node in list.
klist_next Ante up next node in list.
_atomic_dec_and_lock This is an implementation of the notion of "decrement a reference count, and return locked if it decremented to zero".
lockref_get Increments reference count unconditionally
lockref_get_not_zero Increments count unless the count is 0 or dead
lockref_get_or_lock Increments count unless the count is 0 or dead
lockref_put_or_lock decrements count unless count <= 1 before decrement
lockref_get_not_dead Increments count unless the ref is dead
steal_tags Try to steal tags from a remote cpu's percpu freelist.
alloc_local_tag
percpu_ida_alloc allocate a tag
percpu_ida_free free a tag
percpu_ida_for_each_free iterate free ids of a pool
rhashtable_rehash_table
rhashtable_walk_init Initialise an iterator
rhashtable_walk_exit Free an iterator
rhashtable_walk_start Start a hash table walk
rhashtable_walk_stop Finish a hash table walk
gen_pool_add_virt add a new chunk of special memory to the pool
textsearch_register register a textsearch module
textsearch_unregister unregister a textsearch module
iommu_tbl_range_alloc
device_dma_allocations
mce_chrdev_open
mce_chrdev_release
machine_real_restart
queue_event
suspend
do_release
do_open
kvm_async_pf_task_wait
apf_task_wake_all
kvm_async_pf_task_wake
get_fs_root
get_fs_pwd
nfs_mark_for_revalidate
huge_pte_lock
mmput Decrement the use count and release all resources for an mm.
copy_fs
copy_process Create a new process
SYSC_unshare
do_oops_enter_exit It just happens that oops_enter() and oops_exit() are identically implemented...
__exit_signal This function expects the tasklist_lock write-locked.
free_resource
alloc_resource
__ptrace_unlink unlink ptracee and restore its execution state
ptrace_attach
ignoring_children Called with irqs disabled, returns true if children should reap themselves.
dequeue_signal Dequeue a signal and return the element to the caller, which is expected to free it.
__lock_task_sighand
call_usermodehelper_exec_async This is the task which runs the usermode application
proc_cap_handler
try_to_grab_pending steal work item from worklist and disable irq
__queue_work
pool_mayday_timeout
rescuer_thread the rescuer thread function
start_flush_work
work_busy test whether a work is currently pending or running
kmalloc_parameter
maybe_kfree_parameter Does nothing if parameter wasn't kmalloced above.
kthread_create_on_node create a kthread.
kthreadd
double_lock
__cond_resched_lock if a reschedule is pending, drop the given lock, call schedule, and on return reacquire the lock.
torture_spin_lock_write_lock
rcu_torture_timer RCU torture reader from timer handler. Dereferences rcu_torture_current, incrementing the corresponding element of the pipeline array. The counter in the element should never be greater than 1, otherwise, the RCU implementation is broken.
__mod_timer
add_timer_on start a timer on a particular CPU
__run_timers run all expired timers (if any) on this CPU.
posix_timer_add
SYSC_timer_delete
run_posix_cpu_timers This is called from the timer interrupt handler. The irq handler has already updated our counts. We need to check if any timers fire now. Interrupts are disabled.
exit_pi_state_list This task is holding PI mutexes at exit time => bad. Kernel cleans up PI-state, but userspace is likely hosed. (Robust-futex cleanup is separate and might save the day for userspace.)
double_lock_hb Express the locking dependencies for lockdep:
futex_wake Wake up waiters matching bitset queued on this futex (uaddr).
queue_lock The key must be already stored in q->key.
unqueue_me Remove the futex_q from its futex_hash_bucket
fixup_pi_state_owner Fixup the pi_state owner with the new owner.
futex_lock_pi Userspace tried a 0 -> TID atomic transition of the futex value and failed
futex_unlock_pi Userspace attempted a TID -> 0 atomic transition, and failed. This is the in-kernel slowpath: we look up the PI state (if any), and do the rt-mutex unlock.
futex_wait_requeue_pi Wait on uaddr and take uaddr2
cgroup_show_options
cgroup_remount
cgroup_release_agent_write
cgroup_release_agent_show
fmeter_markevent Process any previous ticks, then bump cnt by one (times scale).
fmeter_getrate Process any previous ticks, then return current value.
audit_receive_msg
__fsnotify_d_instantiate instantiate a dentry for inode
untag_chunk
create_chunk
tag_chunk the first tagged inode becomes root of tree
prune_one finish killing struct audit_tree
trim_marked trim the uncommitted chunks from tree
audit_remove_tree_rule called with audit_filter_mutex
audit_trim_trees
audit_add_tree_rule called with audit_filter_mutex
audit_tag_tree
evict_chunk Here comes the stuff asynchronous to auditctl operations
kgdb_register_io_module register KGDB IO module
kgdb_unregister_io_module unregister KGDB IO module
remove_event_file_dir
find_uprobe Find a uprobe corresponding to a given inode:offset. Acquires uprobes_treelock
insert_uprobe Acquire uprobes_treelock. Matching uprobe already exists in rbtree; increment (access refcount) and return the matching uprobe.
delete_uprobe There could be threads that have already hit the breakpoint. They will recheck the current insn and restart if find_uprobe() fails. See find_active_uprobe().
build_probe_list For a given range in vma, build a list of probes that need to be inserted.
vma_has_uprobes
padata_parallel_worker
padata_do_parallel padata parallelization function
padata_get_next Get the next object that needs serialization.
padata_reorder
padata_serial_worker
padata_do_serial padata serialization function
free_pcppages_bulk Frees a number of pages from the PCP lists. Assumes all pages on list are in same zone, and of same order. count is the number of pages to free.
free_one_page
rmqueue_bulk Obtain a specified number of elements from the buddy allocator, all under a single hold of the lock, for efficiency. Add them to the supplied list. Returns the number of new pages which were placed at *list.
adjust_managed_page_count
domain_update_bandwidth
balance_dirty_pages must be called by processes which are generating dirty data
bdi_debug_stats_show
list_lru_add
list_lru_del
__list_lru_count_one
__list_lru_walk_one
shadow_lru_isolate
__pte_alloc_kernel
copy_one_pte copy one vm_area from one task to the other. Assumes the page tables already present in the new task to be cleared in the whole range covered by this vma.
pte_unmap_same handle_pte_fault chooses page fault handler according to an entry which was read non-atomically
do_numa_page
handle_pte_fault These routines also need to handle stuff like marking pages dirty and/or accessed for architectures that don't do it in hardware (most RISC architectures). The early dirtying is also good on the i386.
user_shm_lock
user_shm_unlock
expand_downwards vma is the first one with address < vma->vm_start. Have to extend vma.
anon_vma_prepare attach an anon_vma to a memory region
__page_check_address Check that @page is mapped at @address into @mm.
try_to_unmap_one @arg: enum ttu_flags will be passed to this argument
alloc_vmap_area Allocate a region of KVA of the specified size and alignment, within the vstart and vend.
free_vmap_area Free a region of KVA allocated by alloc_vmap_area
__purge_vmap_area_lazy Purges all lazily-freed vmap areas.
find_vmap_area
new_vmap_block allocates new vmap_block and occupies 2^order pages in this block. Of course the number of pages can't exceed VMAP_BBMAP_BITS
free_vmap_block
purge_fragmented_blocks
vb_alloc
vb_free
vm_unmap_aliases unmap outstanding lazy aliases in the vmap layer
setup_vmalloc_vm
remove_vm_area find and remove a contiguous kernel virtual area
vread read vmalloc area in a safe way.
vwrite write vmalloc area in a safe way.
pcpu_get_vm_areas allocate vmalloc areas for percpu allocator
s_start
SYSC_fadvise64_64
swap_do_scheduled_discard Actually do the discard. After a cluster discard is finished, the cluster will be added to the free cluster list. Caller should hold si->lock.
swap_discard_work
scan_swap_map
get_swap_page
get_swap_page_of_type The only caller of this function is now the suspend routine
swap_info_get
swap_entry_free
swap_type_of Find the swap type that corresponds to given device (if any).
count_swap_pages Return either the total number of swap pages of given type, or the number of free pages of that type (depending on @free)
try_to_unuse We completely avoid races by reading each swap page in advance, and then search for the process using it. All the necessary page table adjustments can then be made atomically.
drain_mmlist After a successful try_to_unuse, if no swap is now in use, we know we can empty the mmlist
_enable_swap_info
enable_swap_info
reinsert_swap_info
SYSC_swapoff
alloc_swap_info
SYSC_swapon
si_swapinfo
__swap_duplicate Verify that a swap entry is valid and increment its swap map count.
frontswap_register_ops Register operations for frontswap
frontswap_shrink Frontswap, like a true swap device, may unnecessarily retain pages
frontswap_curr_pages Count and return the number of frontswap pages across all swap devices. This is exported so that backend drivers can determine current usage without reading debugfs.
__zswap_pool_empty
__zswap_param_set val must be a null-terminated string
zswap_writeback_entry Attempts to free an entry by adding a page to the swap cache, decompressing the entry data into the page, and issuing a bio write to write the page back to the swap device.
zswap_frontswap_store attempts to compress and store a single page
zswap_frontswap_load returns 0 if the page was successfully decompressed, returns -1 on entry not found or error
zswap_frontswap_invalidate_page frees an entry in zswap
zswap_frontswap_invalidate_area frees all zswap entries for the given swap type
hugepage_put_subpool
hugepage_subpool_get_pages Subpool accounting for allocating and reserving pages.
hugepage_subpool_put_pages Subpool accounting for freeing and unreserving pages.
region_add Add the huge page range represented by [f, t) to the reserve map
region_chg Examine the existing reserve map and determine how many huge pages in the specified range [f, t) are NOT currently represented
region_abort Abort the in-progress add operation. The adds_in_progress field
region_del Delete the specified range [f, t) from the reserve map. If the t parameter is LONG_MAX, this indicates that ALL regions after f should be deleted. Locate the regions which intersect [f, t) and either trim, delete or split the existing regions.
region_count Count and return the number of huge pages in the reserve map that intersect with the range [f, t).
free_huge_page
prep_new_huge_page
dissolve_free_huge_page Dissolve a given free hugepage into free buddy pages. This function does nothing for in-use (including surplus) hugepages.
__alloc_buddy_huge_page There are two ways to allocate a huge page: 1. When you have a VMA and an address (like a fault) 2. When you have no VMA (like when setting /proc/.../nr_hugepages)
alloc_huge_page_node This allocation function is useful in the context where vma is irrelevant. E.g. soft-offlining uses this function because it only cares about the physical address of the error page.
gather_surplus_pages Increase the hugetlb pool such that it can accommodate a reservation of size 'delta'.
alloc_huge_page
set_max_huge_pages
nr_overcommit_hugepages_store
hugetlb_overcommit_handler
hugetlb_acct_memory Forward declaration
hugetlb_cow hugetlb_cow() should be called with page lock of the original hugepage held.
huge_add_to_page_cache
hugetlb_no_page
hugetlb_unreserve_pages
follow_huge_pmd
dequeue_hwpoisoned_huge_page This function is called from memory failure code. Assume the caller holds page lock of the head page.
isolate_huge_page
putback_active_hugepage
mpol_shared_policy_lookup Find shared policy intersecting idx
shared_policy_replace Replace a policy range.
mpol_free_shared_policy Free a backing policy store on inode delete.
__mmu_notifier_release This function can't run concurrently against mmu_notifier_register because mm->mm_users > 0 during mmu_notifier_register and exit_mmap runs with mm_users == 0
do_mmu_notifier_register
mmu_notifier_unregister This releases the mm_count pin automatically and frees the mm structure if it was the last user of it
mmu_notifier_unregister_no_release Same as mmu_notifier_unregister but no callback and no srcu synchronization.
unmerge_and_remove_all_rmap_items
scan_get_next_rmap_item
__ksm_enter
__ksm_exit
__drain_alien_cache
__cache_free_alien
do_drain
cache_grow Grow (by 1) the number of slabs within a cache. This is called by kmem_cache_alloc() when there are no active objs left in a cache.
cache_alloc_refill
____cache_alloc_node An interface to enable slab creation on nodeid
cache_flusharray
get_partial_node Try to allocate a partial slab from a specific node.
deactivate_slab Remove the cpu slab
remove_migration_pte Restore a potential migration pte to a working pte entry
__migration_entry_wait Something used the pte of a page under migration. We need to get to the page and wait until migration is finished. When we return from this function the fault will be retried.
do_huge_pmd_wp_page
do_huge_pmd_numa_page NUMA hinting page fault entry point for trans huge pmds
__khugepaged_enter
__khugepaged_exit
__collapse_huge_page_copy
collapse_huge_page
khugepaged_scan_mm_slot
khugepaged_do_scan
khugepaged
mem_cgroup_under_move A routine for checking whether "mem" is under move_account() or not.
mem_cgroup_oom_trylock Check whether the OOM killer is already running under our hierarchy. If someone is running, return false.
mem_cgroup_oom_unlock
mem_cgroup_mark_under_oom
mem_cgroup_unmark_under_oom
mem_cgroup_oom_notify_cb
mem_cgroup_oom_register_event
mem_cgroup_oom_unregister_event
memcg_event_wake Gets called on POLLHUP on eventfd when user closes it.
memcg_write_event_control DO NOT USE IN NEW FILES.
mem_cgroup_css_offline
mem_cgroup_clear_mc
mem_cgroup_can_attach
vmpressure_work_fn
vmpressure Account memory pressure through scanned/reclaimed ratio
hugetlb_cgroup_css_offline Force the hugetlb cgroup to empty the hugetlb resources by moving them to the parent cgroup.
hugetlb_cgroup_migrate hugetlb_lock will make sure a parallel cgroup rmdir won't happen when we migrate hugepages
zpool_register_driver register a zpool implementation.
zpool_unregister_driver unregister a zpool implementation.
zpool_get_driver this assumes @type is null-terminated.
zpool_create_pool Create a new zpool
zpool_destroy_pool Destroy a zpool
zbud_alloc allocates a region of a given size
zbud_free frees the allocation associated with the given handle
zbud_reclaim_page evicts allocations from a pool page and frees it
zs_malloc Allocate block of given size from pool.
zs_free
__zs_compact
cma_add_to_cma_mem_list
cma_get_entry_from_list
ipc_lock_object
ipc_addid add an ipc identifier
ipc_lock lock an ipc structure without rwsem held
sem_lock If the request contains only one semaphore operation, and there are no complex transactions pending, lock only the semaphore involved
freeary Free a semaphore set. freeary() is called with sem_ids.rwsem locked as a writer and the spinlock for this semaphore set held. sem_ids.rwsem remains locked on exit.
find_alloc_undo lookup (and if not present create) undo array
exit_sem add semadj values to semaphores, free undo structures.
get_ns_from_inode
mqueue_get_inode
mqueue_evict_inode
mqueue_create
mqueue_read_file This is the routine for a system read from the queue file.
mqueue_flush_file
mqueue_poll_file
wq_sleep Puts current task to sleep. Caller must hold queue lock. After return lock isn't held. sr: SEND or RECV
SYSC_mq_timedsend
SYSC_mq_timedreceive
SYSC_mq_notify
SYSC_mq_getsetattr
bio_alloc_rescue
punt_bios_to_rescuer
elevator_get
load_default_elevator_module called during boot to load the elevator chosen by the elevator param
elv_register
elv_unregister
elv_iosched_show
blk_flush_plug_list
ioc_clear_queue break any ioc association with the specified queue
ioc_create_icq create and link io_cq
flush_busy_ctxs Process software queues that have been marked busy, splicing them to the for-dispatch list
__blk_mq_run_hw_queue Run this hardware queue, pulling any software queues mapped to it in.
blk_mq_insert_request
blk_mq_insert_requests
blk_mq_merge_queue_io
blk_mq_hctx_cpu_offline
blk_mq_sysfs_rq_list_show
blk_mq_hw_sysfs_rq_list_show
blkg_create If @new_blkg is %NULL, this function tries to allocate a new one as necessary using %GFP_NOWAIT. @new_blkg is always consumed on return.
blkg_destroy_all destroy all blkgs associated with a request_queue
blkcg_deactivate_policy deactivate a blkcg policy on a request_queue
inode_free_security
sb_finish_set_opts
inode_doinit_with_dentry The inode's security attributes must be initialized before first use.
flush_unauthorized_files Derived from fs/exec.c:flush_old_files.
aa_alloc_sid allocate a new sid for a profile
yama_relation_cleanup remove invalid entries from the relation list
yama_ptracer_add add/replace an exception for this tracer/tracee pair
generic_file_llseek_size generic llseek implementation for regular files
put_super drop a temporary reference to superblock
generic_shutdown_super common helper for ->kill_sb()
sget find or create a superblock
iterate_supers call function for all active superblocks
iterate_supers_type call function for superblocks of given type
get_super get the superblock of a device
get_active_super get an active reference to the superblock of a device
user_get_super
do_emergency_remount
get_anon_bdev
free_anon_bdev
chrdev_open Called every time a character special file is opened
cd_forget
cdev_purge
inode_add_bytes
inode_sub_bytes
inode_get_bytes
de_thread This function makes sure the current process has its own signal table, so that flush_signal_handlers can later reset the handlers without disturbing other processes
check_unsafe_exec determine how safe it is to execute the proposed program - the caller must hold ->cred_guard_mutex to protect against PTRACE_ATTACH or seccomp thread-sync
put_pipe_info
fifo_open
do_inode_permission We _really_ want to just do "generic_permission()" without even looking at the inode->i_op values. So we keep a cache flag in inode->i_opflags, that says "this has no special permission function, use the fast case".
do_tmpfile
dentry_unhash The dentry_unhash() helper will try to drop the dentry early: we should have a usage count of 1 if we're the only user of this dentry, and if that is true (possibly after pruning the dcache), then we drop the dentry now.
vfs_link create a new link
setfl
fasync_remove_entry Remove a fasync entry. If successfully removed, return positive and clear the FASYNC flag. If no entry exists, do nothing and return 0.
fasync_insert_entry Insert a new entry into the fasync list. Return the pointer to the old one if we didn't use the new one.
ioctl_fionbio
d_drop
__dentry_kill
lock_parent
fast_dput Try to do a lockless dput(), and return whether that was successful.
dget_parent
__d_find_alias grab a hashed alias of inode
d_find_alias
d_prune_aliases Try to kill dentries associated with this inode. WARNING: you must own a reference to inode.
shrink_dentry_list
d_walk walk the dentry tree
d_set_mounted Called by mount code to set a mountpoint and check if the mountpoint is reachable (e.g. NFS can unhash a directory dentry and then the complete subtree can become unreachable).
d_invalidate detach submounts, prune dcache, and drop
d_alloc allocate a dcache entry
d_set_fallthru Mark a dentry as falling through to a lower layer
__d_instantiate
d_instantiate fill in inode information for a dentry
d_instantiate_unique
d_instantiate_no_diralias instantiate a non-aliased dentry
d_find_any_alias find any alias for a given inode
__d_obtain_alias
__d_lookup search for a dentry (racy)
d_delete delete a dentry
d_rehash add an entry back to the hash
dentry_update_name_case update case insensitive dentry with a new name
dentry_lock_for_move
d_splice_alias splice a disconnected dentry into the tree if one exists
d_tmpfile
inode_sb_list_add add inode to the superblock list of inodes
inode_sb_list_del
__insert_inode_hash hash an inode
__remove_inode_hash remove an inode from the hash
evict Free the inode passed in, removing it from the lists it is still connected to. We remove any pages still attached to the inode and wait for any IO that is still in progress before finally destroying the inode.
evict_inodes evict all evictable inodes for a superblock
invalidate_inodes attempt to free all inodes on a superblock
inode_lru_isolate Isolate the inode from the LRU in preparation for freeing it.
find_inode Called with the inode lock held.
find_inode_fast is the fast path version of find_inode, see the comment at iget_locked for details.
new_inode_pseudo obtain an inode
unlock_new_inode clear the I_NEW state and wake up any waiters
iget5_locked obtain an inode from a mounted file system
iget_locked obtain an inode from a mounted file system
test_inode_iunique search the inode cache for a matching inode number. If we find one, then the inode number we are trying to allocate is not unique and so we should not use it.
iunique get a unique inode number
igrab
ilookup5_nowait search for an inode in the inode cache
ilookup search for an inode in the inode cache
find_inode_nowait find an inode in the inode cache
insert_inode_locked
insert_inode_locked4
iput_final Called when we're dropping the last reference to an inode.
__wait_on_freeing_inode If we try to find an inode in the inode hash while it is being deleted, we have to wait until the filesystem completes its deletion before reporting that it isn't found
expand_fdtable Expand the file descriptor table.
expand_files Expand files.
dup_fd Allocate a new files structure and copy contents from the passed in files structure. errorp will be valid only when the returned files_struct is NULL.
__alloc_fd allocate a file descriptor, mark it busy.
put_unused_fd
__close_fd The same warnings as for __alloc_fd()/__fd_install() apply here...
do_close_on_exec
set_close_on_exec We only lock f_pos if we have threads or if the file might be shared with another process. In both cases we'll have an elevated file count (done either by fdget() or by fork()).
replace_fd
SYSC_dup3
iterate_fd
mnt_alloc_id allocation is serialized by namespace_sem, but we need the spinlock to serialize with freeing.
mnt_free_id
put_mountpoint
simple_xattr_get xattr GET operation for in-memory/pseudo filesystems
__simple_xattr_set
simple_xattr_list xattr LIST operation for in-memory/pseudo filesystems
simple_xattr_list_add Adds an extended attribute to the list
dcache_dir_lseek
dcache_readdir Directory is locked and all positive dentries in it are safe, since for ramfs-type trees they can't go away without unlink() or rmdir(), both impossible due to the lock on directory.
simple_empty
simple_pin_fs
simple_release_fs
simple_transaction_get
locked_inode_to_wb_and_lock_list
inode_to_wb_and_lock_list
__inode_wait_for_writeback Wait for writeback on an inode to complete. Called with i_lock held. Caller must make sure inode cannot go away when we drop i_lock.
inode_wait_for_writeback Wait for writeback on an inode to complete. Caller must have inode pinned.
__writeback_single_inode Write out an inode and its dirty pages. Do not update the writeback list linkage. That is left to the caller. The caller is also responsible for setting I_SYNC flag and calling inode_sync_complete() to clear it.
writeback_single_inode Write out an inode's dirty pages. Either the caller has an active reference on the inode or the inode has I_WILL_FREE set.
writeback_sb_inodes Write a portion of b_io inodes which belong to @sb.
writeback_inodes_wb
wb_writeback Explicit flushing or periodic writeback of "old" data.
block_dump___mark_inode_dirty
__mark_inode_dirty internal function
wait_sb_inodes The @s_sync_lock is used to serialise concurrent sync operations to avoid lock contention problems with concurrent wait_sb_inodes() calls
vfs_fsync_range helper to sync a range of data & metadata to disk
fsstack_copy_inode_size does _NOT_ require i_mutex to be held.
set_fs_root Replace the fs->{rootmnt,root} with {mnt,dentry}. Put the old values. It can block.
set_fs_pwd Replace the fs->{pwdmnt,pwd} with {mnt,dentry}. Put the old values. It can block.
chroot_fs_refs
exit_fs
copy_fs_struct
unshare_fs_struct
pin_remove
pin_insert_group
__find_get_block_slow Various filesystems appear to want __find_get_block to be non-blocking. But it's the page lock which protects the buffers. To get around this, we get exclusion from try_to_free_buffers with the blockdev mapping's private_lock.
osync_buffers_list osync is designed to support O_SYNC io. It waits synchronously for all already-submitted IO to complete, but does not queue any new writes to the disk.
mark_buffer_dirty_inode
__set_page_dirty_buffers Add a page to the dirty page list.
fsync_buffers_list Write out and wait upon a list of buffers.
invalidate_inode_buffers Invalidate any and all dirty buffers on a given inode. We are probably unmounting the fs, but that doesn't mean we have already done a sync(). Just drop the buffers from the inode list.
remove_inode_buffers Remove any clean buffers from the inode's buffer list. This is called when we're trying to free the inode itself. Those buffers can pin it.
grow_dev_page Create the page-cache page that contains the requested block.
__bforget bforget() is like brelse(), except it discards any potentially dirty data.
create_empty_buffers We attach and possibly dirty the buffers atomically wrt __set_page_dirty_buffers() via private_lock. try_to_free_buffers is already excluded via the page lock.
attach_nobh_buffers Attach the singly-linked list of buffers created by nobh_write_begin, to the page (converting it to circular linked list and taking care of page dirty races).
try_to_free_buffers
bdev_write_inode
bdev_evict_inode
bdget
nr_blockdev_pages
bd_acquire
bd_forget Call when you free inode
bd_prepare_to_claim prepare to claim a block device
bd_start_claiming start claiming a block device
blkdev_get open a block device
blkdev_put
iterate_bdevs
__fsnotify_update_child_dentry_flags Given an inode, first check if we care what happens to our children.
fsnotify_recalc_inode_mask Recalculate the inode->i_fsnotify_mask, or the mask of all FS_* event types any notifier is interested in hearing for this inode.
fsnotify_destroy_inode_mark
fsnotify_find_inode_mark given a group and inode, find the mark associated with that combination. if found take a reference to that mark and return it, else return NULL
fsnotify_add_inode_mark Attach an initialized mark to a given inode.
fsnotify_unmount_inodes an sb is unmounting. handle any watched inodes.
fsnotify_detach_mark Remove mark from inode/vfsmount list, group list, drop inode reference if we got one.
fsnotify_free_mark Free fsnotify mark. The freeing is actually happening from a kthread which first waits for srcu period end. Caller must have a reference to the mark or be protected by fsnotify_mark_srcu.
fsnotify_destroy_marks
fsnotify_add_mark_locked Attach an initialized mark to a given group and fs object. These marks may be used for the fsnotify backend to determine which event types should be delivered to which group.
fsnotify_mark_destroy
fsnotify_recalc_vfsmount_mask Recalculate the mnt->mnt_fsnotify_mask, or the mask of all FS_* event types any notifier is interested in hearing for this mount point
fsnotify_destroy_vfsmount_mark
fsnotify_find_vfsmount_mark given a group and vfsmount, find the mark associated with that combination. if found take a reference to that mark and return it, else return NULL
fsnotify_add_vfsmount_mark Attach an initialized mark to a given group and vfsmount. These marks may be used for the fsnotify backend to determine which event types should be delivered to which groups.
dnotify_handle_event Main fsnotify call where events are delivered to dnotify.
dnotify_flush Called every time a file is closed. Looks first for a dnotify mark.
fcntl_dirnotify When a process calls fcntl to attach a dnotify watch to a directory it ends up here. Allocate both a mark for fsnotify to add and a dnotify_struct to be attached to the fsnotify_mark.
inotify_add_to_idr
inotify_idr_find
inotify_remove_from_idr Remove the mark from the idr (if present) and drop the reference on the mark because it was in the idr.
inotify_update_existing_watch
fanotify_mark_remove_from_mask
fanotify_mark_add_to_mask
ep_remove Removes a "struct epitem" from the eventpoll RB tree and deallocates all the associated resources. Must be called with "mtx" held.
ep_insert Must be called with "mtx" held.
timerfd_remove_cancel
timerfd_setup_cancel
handle_userfault The locking rules involved in returning VM_FAULT_RETRY depending on FAULT_FLAG_ALLOW_RETRY, FAULT_FLAG_RETRY_NOWAIT and FAULT_FLAG_KILLABLE are not straightforward. The "Caution" recommendation in __lock_page_or_retry is not an understatement.
userfaultfd_release
userfaultfd_ctx_read
__wake_userfault
userfaultfd_show_fdinfo
put_aio_ring_file
aio_ring_mremap
ioctx_add_table
aio_nr_sub
ioctx_alloc Allocates and initializes an ioctx. Returns an ERR_PTR if it failed.
kill_ioctx Cancels all outstanding aio requests on an aio context. Used when the processes owning a context have all exited to encourage the rapid destruction of the kioctx.
locks_delete_block
locks_insert_block Must be called with flc_lock held.
locks_wake_up_blocks Wake up processes blocked waiting for blocker.
posix_test_lock
flock_lock_inode Try to create a FLOCK lock on filp. We always insert new FLOCK locks after any leases, but before any posix locks.
__posix_lock_file
locks_mandatory_locked Check for an active lock
__break_lease revoke all outstanding leases on file
lease_get_mtime get the last modified time of an inode
fcntl_getlease Enquire what lease is currently active
generic_add_lease
generic_delete_lease
fcntl_setlk Apply the lock described by l to an open file descriptor. This implements both the F_SETLK and F_SETLKW commands of fcntl().
fcntl_setlk64 Apply the lock described by l to an open file descriptor. This implements both the F_SETLK and F_SETLKW commands of fcntl().
locks_remove_lease The i_flctx must be valid when calling into here
posix_unblock_lock stop waiting for a file lock
show_fd_locks
locks_start
__spin_lock_mb_cache_entry
__mb_cache_entry_release
mb_cache_shrink_scan memory pressure callback
mb_cache_shrink_count
mb_cache_create create a new cache
mb_cache_shrink Removes all cache entries of a device from the cache. All cache entries currently in use cannot be freed, and thus remain in the cache. All others are freed.
mb_cache_destroy Shrinks the cache to its minimum possible size (hopefully 0 entries), and then destroys it. If this was the last mbcache, un-registers the mbcache from kernel memory management.
mb_cache_entry_alloc Allocates a new cache entry
mb_cache_entry_get Get a cache entry by device / block number. (There can only be one entry in the cache per device and block.) Returns NULL if no such cache entry exists. The returned cache entry is locked for exclusive access ("single writer").
__mb_cache_entry_find
get_cached_acl
set_cached_acl
forget_cached_acl
forget_all_cached_acls
locks_start_grace
locks_end_grace
drop_pagecache_sb
get_vfsmount_from_fd
register_quota_format
unregister_quota_format
find_quota_format
dquot_mark_dquot_dirty Mark dquot dirty in atomic manner, and return its old dirty flag state
dquot_commit Write dquot to disk
invalidate_dquots Invalidate all dquots on the list. Note that this function is called after
dquot_scan_active Call callback for every active dquot on given filesystem
dquot_writeback_dquots Write all dquot structures to quota files
dqcache_shrink_scan
dqput Put reference to dquot
dqget Get reference to dquot
add_dquot_ref This routine is guarded by dqonoff_mutex mutex
remove_inode_dquot_ref Remove references to dquots from inode and add dquot to list for freeing if we have the last reference to dquot
remove_dquot_ref
__dquot_initialize Initialize quota pointers in inode
__dquot_drop Release all quotas referenced by inode.
inode_add_rsv_space
inode_claim_rsv_space
inode_reclaim_rsv_space
inode_sub_rsv_space
inode_get_rsv_space
__dquot_alloc_space This operation can block, but only after everything is updated
dquot_alloc_inode This operation can block, but only after everything is updated
dquot_claim_space_nodirty Convert in-memory reserved quotas to real consumed quotas
dquot_reclaim_space_nodirty Convert allocated space back to in-memory reserved quotas
__dquot_free_space This operation can block, but only after everything is updated
dquot_free_inode This operation can block, but only after everything is updated
__dquot_transfer Transfer the number of inodes and blocks from one diskquota to another. On success, dquot references in transfer_to are consumed and references to original dquots that need to be released are placed there. On failure, references are kept untouched.
dquot_disable Turn quota off on a device. type == -1 ==> quotaoff for all types (umount)
vfs_load_quota_inode Helper function to turn quotas on when we already have the inode of quota file and no quota information is loaded.
dquot_resume Reenable quotas on remount RW
dquot_enable More powerful function for turning on quotas allowing setting of individual quota flags
do_get_dqblk Generic routine for getting common part of quota structure
do_set_dqblk Generic routine for setting common part of quota structure
dquot_get_state Generic routine for getting common part of quota file information
dquot_set_dqinfo Generic routine for setting common part of quota file information
v2_write_file_info Write information header to quota file
qtree_write_dquot We don't have to be afraid of deadlocks as we never have quotas on quota files...
qtree_read_dquot
close_pdeo pde is locked
proc_entry_rundown
proc_reg_open
proc_reg_release
seq_show
proc_fd_link
start_unregistering called under sysctl_lock, will reacquire if it has to wait
sysctl_head_get
sysctl_head_put
sysctl_head_grab
sysctl_head_finish
lookup_entry
first_entry
next_entry
sysctl_is_seen
get_subdir find or create a subdir with the specified name.
sysctl_follow_link
insert_links
__register_sysctl_table register a leaf sysctl table
unregister_sysctl_table unregister a sysctl table hierarchy
sysfs_remove_dir remove an object's directory.
sysfs_do_create_link_sd
sysfs_delete_link remove symlink in object's directory.
__compat_only_sysfs_link_entry_to_kobj add a symlink to a kobject pointing to a group or an attribute
configfs_get_config_item
configfs_drop_dentry Unhashes the dentry corresponding to given configfs_dirent. Called with parent inode's i_mutex held.
configfs_hash_and_remove
configfs_d_iput
configfs_new_dirent Allocates a new configfs_dirent and links it to the parent configfs_dirent
configfs_create_dir create a directory for a config_item.
configfs_dirent_is_ready Check that a directory does not belong to a directory hierarchy being attached and not validated yet.
configfs_create_link
remove_dir
configfs_attach_attr attaches attribute's configfs_dirent to the dentry corresponding to the attribute file
detach_attrs
configfs_depend_item
configfs_undepend_item Release the dependent linkage. This is much simpler than configfs_depend_item() because we know that the client driver is pinned, thus the subsystem is pinned, and therefore configfs is pinned.
configfs_mkdir
configfs_rmdir
configfs_dir_close
configfs_readdir
configfs_dir_lseek
configfs_register_group creates a parent-child relation between two groups
configfs_unregister_group unregisters a child group from its parent
configfs_register_subsystem
configfs_unregister_subsystem
create_link
configfs_unlink
alloc_dcookie
free_dcookie
debugfs_remove_recursive recursively removes a directory
tracefs_remove_recursive recursively removes a directory
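
All of the callers above follow the same basic pattern: take the lock, touch the shared state, release the lock. A minimal usage sketch (my_lock, my_count and my_increment are hypothetical names; a process-context lock with no interrupt-context users is assumed):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(my_lock);   /* hypothetical lock */
    static int my_count;               /* shared state it protects */

    static void my_increment(void)
    {
        spin_lock(&my_lock);           /* spins until acquired; disables preemption */
        my_count++;                    /* critical section */
        spin_unlock(&my_lock);         /* releases the lock, re-enables preemption */
    }

Callers whose lock can also be taken from interrupt context use the spin_lock_irq()/spin_lock_irqsave() variants instead; plain spin_lock(), as listed here, is only safe when no IRQ path contends for the same lock.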