Function Logic Report |
Source Code: arch/x86/lib/memcpy_32.c |
Create Date: 2022-07-27 08:23:59 |
Last Modify: 2020-03-12 14:18:49 | Copyright © Brick |
Function name: memset
Function prototype: __visible void *memset(void *s, int c, size_t count)
Return type: void *
Parameters:
Type | Parameter | Name |
---|---|---|
void * | s | |
int | c | |
size_t | count | |
Functions calling memset:
Name | Description |
---|---|
insn_init | insn_init() - initialize struct insn. @insn: &struct insn to be initialized. @kaddr: address (in kernel memory) of instruction (or copy thereof). @x86_64: !0 for 64-bit kernel or 64-bit app |
memset_io | |
check_cpu | CPU check |
strscpy_pad | strscpy_pad() - Copy a C-string into a sized buffer. @dest: Where to copy the string to. @src: Where to copy the string from. @count: Size of destination buffer. Copy the string, or as much of it as fits, into the dest buffer. |
set_intr_gate | |
mm_init | Set up the kernel memory allocators |
kexec_free_initrd | |
wb_domain_init | |
sanitize_boot_params | Boot parameter check |
write_ldt | |
extend_brk | |
e820__range_remove | Remove a range of memory from the E820 table: |
copy_thread_tls | |
flush_thread | |
user_regset_copyout_zero | These two parallel the two above, but for portions of a regset layout that always read as all-zero or for which writes are ignored. |
fpstate_init | |
fpu__copy | |
fpstate_sanitize_xstate | When executing XSAVEOPT (or other optimized XSAVE instructions), if a processor implementation detects that an FPU state component is still (or is again) in its initialized state, it may clear the corresponding bit in the header. |
copy_xstate_to_kernel | Convert from kernel XSAVES compacted format to standard format and copy to a kernel-space ptrace buffer. It supports partial copy but pos always starts from zero. This is called from xstateregs_get() and there we check the CPU has XSAVES. |
copy_xstate_to_user | Convert from kernel XSAVES compacted format to standard format and copy to a user-space buffer. It supports partial copy but pos always starts from zero. This is called from xstateregs_get() and there we check the CPU has XSAVES. |
set_tls_desc | |
fill_user_desc | Get the current Thread-Local Storage area: |
perf_get_x86_pmu_capability | |
early_identify_cpu | Do minimum CPU detection early. Fields really needed: vendor, cpuid_level, family, model, mask, cache alignment. The others are not touched to avoid unwanted side effects. WARNING: this function is only called on the boot CPU. |
identify_cpu | This does the hard work of actually picking apart the CPU stuff... |
cpu_init | cpu_init() initializes state that is per-CPU. Some data is already initialized (naturally) in the bootstrap process, such as the GDT and IDT. We reload them nevertheless; this function acts as a 'CPU state barrier', nothing should get across. |
do_clear_cpu_cap | |
mce_setup | Do initial initialization of a struct mce |
mce_reign | The Monarch's reign |
do_machine_check | The actual machine check handler. This only handles real exceptions when something got corrupted coming in through int 18. This is executed in NMI context, not subject to normal locking rules; this implies that most kernel services cannot be safely used. |
prepare_threshold_block | |
log_and_reset_block | |
store_interrupt_enable | |
store_threshold_limit | |
apei_write_mce | |
mtrr_write | seq_file can seek but we ignore it. Format of control line: "base=%Lx size=%Lx type=%s" or "disable=%d" |
mtrr_ioctl | |
generic_set_mtrr | generic_set_mtrr - set variable MTRR register on the local CPU. @reg: The register to set. @base: The base address of the region. @size: The size of the region; if this is 0 the region is disabled. @type: The type of the region. Returns nothing. |
mtrr_trim_uncached_memory | mtrr_trim_uncached_memory - trim RAM not covered by MTRRs. @end_pfn: ending page frame number. Some buggy BIOSes don't set up the MTRRs properly for systems with certain memory configurations. |
collect_cpu_info | |
collect_cpu_info_early | |
collect_cpu_info | |
free_equiv_cpu_table | |
load_microcode_amd | |
__mon_event_count | |
pseudo_lock_dev_mmap | |
init_irq_alloc_info | |
copy_irq_alloc_info | |
native_restore_boot_irq_mode | |
unlock_ExtINT_logic | This looks a bit hackish but it's about the only way of sending a few INTA cycles to 8259As and any associated glue logic. |
mp_unregister_ioapic | |
mp_setup_entry | |
get_mn | |
crash_setup_memmap_entries | Prepare memory map for crash dump kernel |
setup_boot_parameters | |
do_sys_vm86 | |
kvmclock_init_mem | |
jailhouse_get_wallclock | |
setup_bios_corruption_check | |
branch_clear_offset | |
create_simplefb | |
__unwind_start | |
__unwind_start | |
__unwind_start | |
vmacache_flush | |
mm_alloc | Allocate and initialize an mm_struct. |
copy_process | Create a process |
alloc_resource | |
xdr_stream_decode_uint32_array | xdr_stream_decode_uint32_array - Decode variable length array of integers. @xdr: pointer to xdr_stream. @array: location to store the integer array or NULL. @array_size: number of elements to store. Return values: On success, returns number of elements. |
copy_siginfo_to_user32 | |
do_sigaltstack | |
do_compat_sigaltstack | |
SYSCALL_DEFINE1 | |
SYSCALL_DEFINE2 | |
SYSCALL_DEFINE2 | Only setdomainname; getdomainname can be implemented by calling uname(). |
getrusage | |
do_sysinfo | do_sysinfo - fill in sysinfo struct. @info: pointer to buffer to fill |
init_pwq | Initialize newly alloced @pwq which is associated with @wq and @pool |
__kthread_init_worker | |
sched_copy_attr | Mimics kernel/events/core.c perf_copy_attr(). |
__visit_domain_allocation_hell | |
cpuacct_stats_show | |
sugov_start | |
group_init | |
debug_mutex_lock_common | Must be called with lock->wait_lock held. |
debug_mutex_free_waiter | |
lockdep_reset | |
reinit_class | |
debug_rt_mutex_init_waiter | |
debug_rt_mutex_free_waiter | |
pm_qos_remove_request | pm_qos_remove_request - modifies an existing qos request. @req: handle to request list element. Will remove pm qos request from the list of constraints and recompute the current target value for the pm_qos_class. Call this on slow code paths. |
test_wakealarm | To test system suspend, we need a hands-off mechanism to resume the system. RTC wake alarms are a common self-contained mechanism. |
__get_safe_page | |
init_header | |
save_image_lzo | save_image_lzo - Save the suspend image data compressed with LZO. @handle: Swap map handle to use for saving the image. @snapshot: Image to read data from. @nr_to_write: Number of pages to save. |
swsusp_write | swsusp_write - Write entire image and metadata. @flags: flags to pass to the "boot" kernel in the image header. It is important _NOT_ to umount filesystems at this point; we want them synced (in case something goes wrong). |
load_image_lzo | load_image_lzo - Load compressed image data and decompress them with LZO. @handle: Swap map handle to use for loading data. @snapshot: Image to copy uncompressed data into. @nr_to_read: Number of pages to load. |
swsusp_read | swsusp_read - read the hibernation image. @flags_p: flags passed by the "frozen" kernel in the image header should be written into this memory location |
snapshot_open | |
snapshot_ioctl | |
log_store | Insert record into the buffer, discard old ones, update heads |
rcu_sync_init | rcu_sync_init() - Initialize an rcu_sync structure. @rsp: Pointer to rcu_sync structure to be initialized |
dma_direct_alloc_pages | |
__dma_alloc_from_coherent | |
__dma_entry_alloc | |
swiotlb_update_mem_attributes | Early SWIOTLB allocation may be too early to allow an architecture to perform the desired operations. This function allows the architecture to call SWIOTLB when the operations are possible. It needs to be called before the SWIOTLB memory is used. |
swiotlb_late_init_with_tbl | |
write_profile | Writing to /proc/profile resets the counters. Writing a 'profiling multiplier' value into it also re-sets the profiling interrupt frequency, on architectures that support this. |
__hrtimer_init | |
do_timer_create | Create a POSIX.1b interval timer. |
do_timer_gettime | Get the time remaining on a POSIX.1b interval timer. |
do_timer_settime | |
do_cpu_nanosleep | |
SYSCALL_DEFINE3 | |
COMPAT_SYSCALL_DEFINE3 | |
move_module | |
kdb_walk_kallsyms | |
fill_ac | Write an accounting entry for an exiting process. The acct_process() call is the workhorse of the process accounting system. The struct acct is built here and then written into the accounting file. |
final_note | |
kimage_load_crash_segment | |
crash_save_cpu | |
crash_prepare_elf64_headers | |
elf_read_ehdr | |
kexec_free_elf_info | kexec_free_elf_info - free memory allocated by elf_read_from_buffer |
put_compat_rusage | |
compat_get_user_cpu_mask | |
get_compat_sigevent | We currently only need the following fields from the sigevent structure: sigev_value, sigev_signo, sig_notify and (sometimes) sigev_notify_thread_id. The others are handled in user mode. We also assume that copying sigev_value.sival_int is sufficient. |
css_task_iter_start | css_task_iter_start - initiate task iteration. @css: the css to walk tasks of. @flags: CSS_TASK_ITER_* flags. @it: the task iterator to use. Initiate iteration through the tasks of @css |
init_and_link_css | |
map_write | |
cpu_stop_init_done | |
audit_receive_msg | |
audit_krule_to_data | Translate kernel rule representation to struct audit_rule_data. |
audit_alloc_name | |
__audit_mq_open | __audit_mq_open - record audit data for a POSIX MQ open. @oflag: open flag. @mode: mode bits. @attr: queue attributes |
__audit_mq_sendrecv | __audit_mq_sendrecv - record audit data for a POSIX MQ timed send/receive. @mqdes: MQ descriptor. @msg_len: Message length. @msg_prio: Message priority. @abs_timeout: Message timeout in absolute time |
gcov_info_reset | gcov_info_reset - reset profiling data to zero. @info: profiling data set |
gcov_info_reset | gcov_info_reset - reset profiling data to zero. @info: profiling data set |
gcov_info_reset | gcov_info_reset - reset profiling data to zero. @info: profiling data set |
__get_insn_slot | __get_insn_slot() - Find a slot on an executable page for an instruction. We allocate an executable page if there's no room on existing ones. |
kgdb_handle_exception | kgdb_handle_exception() - main entry point from a kernel exception. Locking hierarchy: interface locks, if any (begin_session); kgdb lock (kgdb_active) |
gdb_serial_stub | This function performs all gdbserial command processing |
kdb_read | kdb_read - This function reads a string of characters, terminated by a newline or by reaching the end of the supplied buffer, from the current kernel debugger console device. |
kdb_defcmd | |
kdb_md_line | kdb_md - This function implements the 'md', 'md1', 'md2', 'md4', 'md8', 'mdr' and 'mds' commands. |
kdb_sysinfo | Most of this code has been lifted from kernel/timer.c::sys_sysinfo(). I cannot call that code directly from kdb; it has an unconditional cli()/sti() and calls routines that take locks which can stop the debugger. |
kdb_register_flags | |
kdbgetsymval | kdbgetsymval - Return the address of the given symbol |
kdbnearsym | kdbnearsym - Return the name of the symbol with the nearest address less than 'addr' |
debug_kmalloc | |
debug_kfree | |
kdb_initbptab | Initialize the breakpoint table and register breakpoint commands. |
read_actions_logged | |
write_actions_logged | |
audit_actions_logged | |
relay_alloc_buf | relay_alloc_buf - allocate a channel buffer. @buf: the buffer struct. @size: total size of the buffer. Returns a pointer to the resulting buffer, %NULL if unsuccessful. The passed-in size will get page aligned, if it isn't already. |
fill_stats | |
fill_stats_for_tgid | |
cgroupstats_user_cmd | |
clear_tsk_latency_tracing | |
clear_global_latency_tracing | |
__account_scheduler_latency | __account_scheduler_latency - record an occurred latency. @tsk: the task struct of the task hitting the latency. @usecs: the duration of the latency in microseconds. @inter: 1 if the sleep was interruptible, 0 if uninterruptible. |
trace_iterator_reset | Reset the state of the trace_iterator so that it can read consumed data. Normally, the trace_iterator is used for reading the data when it is not consumed, and must retain state. |
ring_buffer_read_page | ring_buffer_read_page - extract a page from the ring buffer. @buffer: buffer to extract from. @data_page: the page to use, allocated from ring_buffer_alloc_read_page. @len: amount to extract. @cpu: the cpu of the buffer to extract |
trace_parser_get_init | trace_parser_get_init - gets the buffer for trace parser |
allocate_cmdlines_buffer | |
trace_buffered_event_enable | trace_buffered_event_enable - enable buffering events. When events are being filtered, it is quicker to use a temporary buffer to write the event data into if there's a likely chance that it will not be committed. |
tracing_read_pipe | Consumer reader. |
tracing_map_array_clear | |
perf_trace_buf_alloc | |
perf_ftrace_function_call | |
event_hist_trigger | |
____bpf_probe_read_user | |
____bpf_probe_read_user_str | |
bpf_probe_read_kernel_common | |
bpf_probe_read_kernel_str_common | |
____bpf_perf_event_read_value | |
____bpf_perf_prog_read_value | |
init_usb_anchor | |
trace_probe_log_clear | |
bpf_prog_calc_tag | |
bpf_probe_read_kernel | |
bpf_map_charge_move | |
bpf_obj_name_cpy | dst and src must have at least BPF_OBJ_NAME_LEN bytes. Return 0 on success and < 0 on error. |
identify_ramdisk_image | This routine tries to find a RAM disk image to load, and returns the number of blocks to read for a non-compressed image, 0 if the image is a compressed image, and -1 if an image with the right magic numbers could not be found. |
shrink_page_list | shrink_page_list() returns the number of reclaimed pages |
shrink_node | |
lruvec_init | |
wb_init | |
pcpu_alloc | pcpu_alloc - the percpu allocator. @size: size of area to allocate in bytes. @align: alignment of area (max PAGE_SIZE). @reserved: allocate from the reserved chunk if available. @gfp: allocation flags. Allocate percpu area of @size bytes aligned at @align |
memcg_accumulate_slabinfo | |
cache_show | |
kzfree | kzfree - like kfree but zero memory. @p: object to free memory of. The memory of the object @p points to is zeroed before being freed. If @p is %NULL, kzfree() does nothing. |
do_mmap_private | set up a private mapping or an anonymous shared mapping |
do_mmap | handle mapping creation for uClinux |
init_rss_vec | |
mincore_pte_range | |
do_mincore | Do a chunk of "sys_mincore()". We've already checked all the arguments and we hold the mmap semaphore: we should just return the amount of info we're asked for. |
aligned_vread | Small helper routine: copy contents to buf from addr. If the page is not present, fill with zero. |
vread | vread() - read vmalloc area in a safe way. @buf: buffer for reading data. @addr: vm address. @count: number of bytes to be read. This function checks that addr is a valid vmalloc'ed area, and copies data from that area to a given buffer |
show_numa_info | |
build_zonelists | Build zonelists ordered by zone and nodes within zones. This results in conserving DMA zone[s] until all Normal memory is exhausted, but results in overflowing to remote node while memory may still exist in local DMA zone. |
__build_all_zonelists | |
pageset_init | |
free_reserved_area | |
memblock_double_array | |
memblock_alloc_try_nid | memblock_alloc_try_nid - allocate boot memory block. @size: size of memory block to be allocated in bytes. @align: alignment of the region and block's size. @min_addr: the lower bound of the memory region from where the allocation is preferred |
swap_cluster_schedule_discard | Add a cluster to the discard list and schedule it to do discard |
swap_do_scheduled_discard | Actually do the discard. After a cluster discard is finished, the cluster will be added to the free cluster list. Caller should hold si->lock. |
swap_free_cluster | |
dma_pool_alloc | dma_pool_alloc - get a block of consistent memory. @pool: dma pool that will produce the block. @mem_flags: GFP_* bitmask. @handle: pointer to dma address of block. Return: the kernel virtual address of a currently unused block |
dma_pool_free | dma_pool_free - put block back into dma pool. @pool: the dma pool holding the block. @vaddr: virtual address of block. @dma: dma address of block. Caller promises neither device nor driver will again touch this block unless it is first re-allocated. |
vmemmap_alloc_block_zero | |
slob_alloc | slob_alloc: entry point into the slob allocator. |
poison_page | |
slab_alloc_node | |
slab_alloc | |
___cache_free | |
kmem_cache_alloc_bulk | |
slab_free_freelist_hook | |
maybe_wipe_obj_freeptr | If the object has been wiped upon free, make sure it's fully initialized by zeroing out the freelist pointer. |
slab_alloc_node | Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc) have the fastpath folded into their functions, so there is no function call overhead for requests that can be satisfied on the fastpath. |
kmem_cache_alloc_bulk | Note that interrupts must be enabled when calling this function. |
copy_msqid_to_user | |
bio_init | Users of this function have their own bio allocation. Subsequently, they must remember to pair any call to bio_init() with bio_uninit() when IO has completed, or when the bio is released. |
key_garbage_collector | Reaper for unused keys. |
dccp_zeroed_hdr | |
fscrypt_setup_filename | fname.c |