Function Logic Report
Source Code: include/asm-generic/atomic-instrumented.h
Create Date: 2022-07-27 06:38:47
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick
Function: atomic_set
Prototype: static inline void atomic_set(atomic_t *v, int i)
Return type: void
Parameters:
Type | Parameter | Description
---|---|---
atomic_t * | v | the atomic variable to set
int | i | the value to assign
44 | kasan_check_write(v, sizeof(*v))
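For context, here is a minimal sketch of how the surrounding definition reads in include/asm-generic/atomic-instrumented.h, assuming a kernel tree of roughly this vintage; the report extracts only line 44, and the arch_atomic_set() call on the following line is inferred rather than shown by the tool:

```c
/*
 * Sketch of the instrumented wrapper (assumed ~v5.x layout).
 * kasan_check_write() reports to KASAN that the whole atomic_t is
 * about to be written; the plain store itself is delegated to the
 * architecture-specific backend.
 */
static inline void atomic_set(atomic_t *v, int i)
{
	kasan_check_write(v, sizeof(*v));	/* line 44 in this report */
	arch_atomic_set(v, i);
}
```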
Callers:
Name | Description
---|---
rhashtable_init | rhashtable_init - initialize a new hash table. @ht: hash table to be initialized. @params: configuration parameters. Initializes a new hash table based on the provided configuration parameters.
test_rht_init | |
setup_fault_attr | setup_fault_attr() is a helper function for various __setup handlers, so it returns 0 on error, because that is what __setup handlers do.
sbitmap_queue_init_node | |
sbitmap_queue_update_wake_batch | |
sbq_wake_ptr | |
tboot_late_init | |
mce_start | Start of Monarch synchronization. This waits until all CPUs have entered the exception handler and then determines if any of them saw a fatal event that requires panic. Then it executes them in the entry order. TBD: double check parallel CPU hotunplug.
mce_end | Synchronize between CPUs after the main scanning loop. This invokes the bulk of the Monarch processing.
intel_init_thermal | |
microcode_reload_late | Reload microcode late on all CPUs. Wait for a sec until they all gather together.
kgdb_arch_handle_exception | kgdb_arch_handle_exception - Handle architecture-specific GDB packets. @e_vector: The error vector of the exception that happened. @signo: The signal number of the exception that happened. @err_code: The error code of the exception that happened.
mm_init | Set up the kernel memory allocators.
copy_signal | Copy the signal structure.
tasklet_init | |
flush_workqueue_prep_pwqs | flush_workqueue_prep_pwqs - prepare pwqs for workqueue flushing. @wq: workqueue being flushed. @flush_color: new flush color, < 0 for no-op. @work_color: new work color, < 0 for no-op. Prepare pwqs for workqueue flushing.
alloc_workqueue | |
create_nsproxy | |
cred_alloc_blank | Allocate blank credentials, such that the credentials can be filled in at a later date without risk of ENOMEM.
prepare_creds | prepare_creds - Prepare a new set of task credentials for modification.
prepare_kernel_cred | prepare_kernel_cred - Prepare a set of credentials for a kernel service. @daemon: A userspace daemon to be used as a reference.
cpu_check_up_prepare | If the CPU has died properly, set its state to CPU_UP_PREPARE and return success.
groups_alloc | |
sched_init | Initialize scheduler data structures and create the run queues.
cpupri_init | cpupri_init - initialize the cpupri structure. @cp: The cpupri context. Return: -ENOMEM on memory allocation failure.
init_defrootdomain | |
sd_init | |
membarrier_exec_mmap | |
group_init | |
psi_schedule_poll_work | Schedule polling if it's not already scheduled. It's safe to call even from hotpath because even though kthread_queue_delayed_work takes worker->lock spinlock, that spinlock is never contended due to the poll_scheduled atomic preventing such competition.
psi_poll_work | |
psi_trigger_destroy | |
hib_init_batch | |
crc32_threadfn | CRC32 update function that runs in its own thread. |
lzo_compress_threadfn | Compression function that runs in its own thread. |
save_image_lzo | save_image_lzo - Save the suspend image data compressed with LZO. @handle: Swap map handle to use for saving the image. @snapshot: Image to read data from. @nr_to_write: Number of pages to save.
lzo_decompress_threadfn | Decompression function that runs in its own thread.
load_image_lzo | load_image_lzo - Load compressed image data and decompress them with LZO. @handle: Swap map handle to use for loading data. @snapshot: Image to copy uncompressed data into. @nr_to_read: Number of pages to load.
init_srcu_struct_fields | Initialize non-compile-time initialized fields, including the associated srcu_node and srcu_data structures. The is_static parameter is passed through to init_srcu_struct_nodes(), and also tells us that ->sda has already been wired up to srcu_data.
srcu_barrier | srcu_barrier - Wait until all in-flight call_srcu() callbacks complete. @ssp: srcu_struct on which to wait for in-flight callbacks.
rcu_torture_barrier | kthread function to drive and coordinate RCU barrier testing. |
rcu_torture_barrier_init | Initialize RCU barrier testing. |
rcu_torture_init | |
rcu_perf_init | |
rcu_barrier | rcu_barrier - Wait until all in-flight call_rcu() callbacks complete.
futex_init | |
crash_kexec | |
init_cgroup_root | |
init_and_link_css | |
create_user_ns | Create a new user namespace, deriving the creator from the user in the passed credentials, and replacing that user with the new root user for the new namespace. This is called by copy_creds(), which will finish setting the target task's credentials.
cpu_stop_init_done | |
set_state | |
kgdb_cpu_enter | |
kgdb_tasklet_bpt | There are times a tasklet needs to be used vs a compiled-in break point so as to cause an exception outside a kgdb I/O module, such as is the case with kgdboe, where calling a breakpoint in the I/O driver itself would be fatal.
kdb_disable_nmi | |
reset_hung_task_detector | |
tracing_map_clear | tracing_map_clear - Clear a tracing_map. @map: The tracing_map to clear. Resets the tracing map to a cleared or initial state.
tracing_map_create | tracing_map_create - Create a lock-free map and element pool. @map_bits: The size of the map (2 ** map_bits). @key_size: The size of the key for the map in bytes. @ops: Optional client-defined tracing_map_ops instance. @private_data: Client data associated with the map.
alloc_retstack_tasklist | Try to assign a return stack array on FTRACE_RETSTACK_ALLOC_SIZE tasks. |
graph_init_task | |
trace_create_new_event | |
perf_mmap | |
perf_pmu_register | |
perf_output_wakeup | |
__create_xol_area | |
padata_init_pqueues | Initialize all percpu queues used by parallel workers |
padata_alloc_pd | Allocate and initialize the internal cpumask-dependent resources.
static_key_slow_inc | |
static_key_enable_cpuslocked | |
anon_vma_alloc | |
anon_vma_ctor | |
page_add_new_anon_rmap | page_add_new_anon_rmap - add pte mapping to a new anonymous page. @page: the page to add the mapping to. @vma: the vm area in which the mapping is added. @address: the user virtual address mapped. @compound: charge the page as compound or small page.
hugepage_add_new_anon_rmap | |
prep_compound_page | |
swapin_nr_pages | |
init_swap_address_space | |
__frontswap_invalidate_area | Invalidate all data from frontswap associated with all offsets for the specified swaptype.
prep_compound_gigantic_page | |
mpol_new | This function just creates a new policy, does some checks and simple initialization. You must invoke mpol_set_nodemask() to set nodes.
__mpol_dup | Slow path of a mempolicy duplicate |
shared_policy_replace | Replace a policy range. |
get_huge_zero_page | |
create_object | Create the metadata (struct kmemleak_object) corresponding to an allocated memory block and add it to the object_list and object_tree_root.
zpool_register_driver | zpool_register_driver() - register a zpool implementation. @driver: driver to register.
msg_init_ns | |
bio_init | Users of this function have their own bio allocation. Subsequently, they must remember to pair any call to bio_init() with bio_uninit() when IO has completed, or when the bio is released.
bio_reset | bio_reset - reinitialize a bio. @bio: bio to reset. Description: After calling bio_reset(), @bio will be in the same state as a freshly allocated bio returned by bio_alloc_bioset() - the only fields that are preserved are the ones initialized by bio_alloc_bioset().
create_task_io_context | |
blk_mq_alloc_hctx | |
scale_cookie_change | We scale the qd down faster than we scale up, so we need to use this helper to adjust the scale_cookie accordingly so we don't prematurely get scale_cookie at DEFAULT_SCALE_COOKIE and unthrottle too much.
iolatency_clear_scaling | |
iolatency_pd_init | |
blk_iocost_init | |
kyber_init_hctx | |
key_user_lookup | Get the key quota record for a user, allocating a new record if one doesn't already exist.
selinux_avc_init | |
tomoyo_commit_condition | tomoyo_commit_condition - Commit "struct tomoyo_condition". @entry: Pointer to "struct tomoyo_condition". Returns pointer to "struct tomoyo_condition" on success, NULL otherwise. This function merges duplicated entries.
tomoyo_collect_entry | tomoyo_collect_entry - Try to kfree() deleted elements. Returns nothing.
tomoyo_get_group | tomoyo_get_group - Allocate memory for "struct tomoyo_path_group"/"struct tomoyo_number_group". @param: Pointer to "struct tomoyo_acl_param". @idx: Index number. Returns pointer to "struct tomoyo_group" on success, NULL otherwise.
tomoyo_get_name | tomoyo_get_name - Allocate permanent memory for string data. @name: The string to store into the permanent memory. Returns pointer to "struct tomoyo_path_info" on success, NULL otherwise.
alloc_ns | alloc_ns - allocate, initialize and return a new namespace. @prefix: parent namespace name (MAYBE NULL). @name: a preallocated name (NOT NULL). Returns: refcounted namespace or NULL on failure.
alloc_super | alloc_super - create new superblock. @type: filesystem type superblock should belong to. @flags: the mount flags. @user_ns: User namespace for the super_block. Allocates and initializes a new &struct super_block.
__d_alloc | __d_alloc - allocate a dcache entry. @sb: filesystem it will belong to. @name: qstr of the name. Allocates a dentry. It returns %NULL if there is insufficient memory available. On a success the dentry is returned. The name passed in is copied and the copy passed in may be reused after this call.
inode_init_always | inode_init_always - perform inode structure initialisation. @sb: superblock inode belongs to. @inode: inode to initialise. These are initializations that need to be done on every inode allocation as the fields are not initialised by slab allocation.
dup_fd | Allocate a new files structure and copy contents from the passed in files structure. errorp will be valid only when the returned files_struct is NULL.
alloc_mnt_ns | |
__blkdev_direct_IO | |
fsnotify_alloc_group | Create a new fsnotify_group and hold a reference for the group returned. |
ioctx_alloc | ioctx_alloc - Allocates and initializes an ioctx. Returns an ERR_PTR if it failed.
exit_aio | exit_aio: called when the last user of mm goes away. At this point, there is no way for any new requests to be submitted or any of the io_* syscalls to be called on the context. There may be outstanding kiocbs, but free_ioctx() will explicitly wait on them.
SYSCALL_DEFINE1 | sys_io_destroy: Destroy the aio_context specified. May cancel any outstanding AIOs and block on completion. Will fail with -ENOSYS if not implemented. May fail with -EINVAL if the context pointed to is invalid.
io_wq_create | |
mb_cache_entry_create | mb_cache_entry_create - create entry in cache. @cache: cache where the entry should be created. @mask: gfp mask with which the entry should be allocated. @key: key of the entry. @value: value of the entry. @reusable: is the entry reusable by others?
zap_threads | |
iomap_page_create | |
iomap_dio_rw | iomap_dio_rw() always completes O_[D]SYNC writes regardless of whether the IO is being issued as AIO or not. This allows us to optimise pure data writes to use REQ_FUA rather than requiring generic_write_sync() to issue a REQ_FLUSH post write.
get_empty_dquot | |
atomic_long_set | |
static_key_enable | |
static_key_disable | |
osq_lock_init | |
init_tlb_flush_pending | |
refcount_set | refcount_set - set a refcount's value. @r: the refcount. @n: value to which the refcount will be set.
set_page_count | |
page_mapcount_reset | The atomic page->_mapcount starts from -1, so that transitions both from it and to it can be tracked, using atomic_inc_and_test and atomic_add_negative(-1).
bio_cnt_set | |
__skb_header_release | Release a reference to the header.
init_irq_work | |
xprt_inject_disconnect | |
rq_wait_init |
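Most of the callers above use atomic_set() the same way: to seed a counter or state word while the containing object is still private to the initializing thread. A minimal sketch under that assumption (struct foo and foo_init() are hypothetical names, not taken from the kernel):

```c
#include <linux/atomic.h>

struct foo {				/* hypothetical example structure */
	atomic_t refcount;
	atomic_t pending;
};

static void foo_init(struct foo *f)
{
	/*
	 * atomic_set() is a plain initializing store with no memory-ordering
	 * guarantees; it is safe here only because @f is not yet visible to
	 * other CPUs, which is the pattern shared by the initializers listed
	 * in the table above.
	 */
	atomic_set(&f->refcount, 1);
	atomic_set(&f->pending, 0);
}
```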