Caller Name | Description |
--- | --- |
verbose_linfo | |
print_liveness | |
print_verifier_state | |
push_stack | |
mark_reg_known_zero | |
mark_reg_unknown | |
mark_reg_not_init | |
add_subprog | |
check_subprogs | |
mark_reg_read | Parentage chain of this register (or stack slot) should take care of all issues like callee-saved registers, stack slot allocation time, etc.
check_reg_arg | |
backtrack_insn | For a given verifier state, backtrack_insn() is called from the last insn to the first insn. Its purpose is to compute a bitmask of registers and stack slots that need precision in the parent verifier state.
__mark_chain_precision | |
check_stack_write | check_stack_read/write functions track spill/fill of registers; stack boundary and alignment are checked in check_mem_access()
check_stack_read | |
check_stack_access | |
check_map_access_type | |
__check_map_access | check read/write into a map element returned by bpf_map_lookup_elem()
check_map_access | check read/write into a map element with possible variable offset
__check_packet_access | |
check_packet_access | |
check_ctx_access | check access to 'struct bpf_context' fields. Supports fixed offsets only
check_flow_keys_access | |
check_sock_access | |
check_pkt_ptr_alignment | |
check_generic_ptr_alignment | |
check_max_stack_depth | starting from the main bpf function, walk all instructions of the function and recursively walk all callees that the given function can call
check_ctx_reg | |
check_tp_buffer_access | |
check_ptr_to_btf_access | |
check_mem_access | check whether memory at (regno + off) is accessible for t = (read \| write); if t==write, value_regno is a register whose value is stored into memory; if t==read, value_regno is a register which will receive the value from memory; if t==write && value_regno==-1, some unknown value is stored into memory
check_xadd | |
__check_stack_boundary | |
check_stack_boundary | when register 'regno' is passed into a function that will read 'access_size' bytes from that pointer, make sure that it's within the stack boundary and all elements of the stack are initialized
process_spin_lock | Implementation details: bpf_map_lookup returns PTR_TO_MAP_VALUE_OR_NULL; two bpf_map_lookups (even with the same key) will have different reg->id
check_func_arg | |
check_map_func_compatibility | |
check_func_call | |
prepare_func_exit | |
record_func_map | |
record_func_key | |
check_reference_leak | |
check_helper_call | |
check_reg_sane_offset | |
adjust_ptr_min_max_vals | Handles arithmetic on a pointer and a scalar: computes new min/max and var_off. Caller should also handle the BPF_MOV case separately. If we return -EACCES, the caller may want to try again treating the pointer as a scalar
adjust_scalar_min_max_vals | WARNING: This function does calculations on 64-bit values, but the actual execution may occur on 32-bit values. Therefore, things like bitshifts need extra checks in the 32-bit case (a toy range-tracking sketch follows the table)
adjust_reg_min_max_vals | Handles ALU ops other than BPF_END, BPF_NEG and BPF_MOV: computes new min/max and var_off.
check_alu_op | check validity of 32-bit and 64-bit arithmetic operations
check_cond_jmp_op | |
check_ld_imm | verify BPF_LD_IMM64 instruction |
check_ld_abs | verify safety of LD_ABS\|LD_IND instructions: they can only appear in programs where ctx == skb; since they are wrappers of function calls, they scratch R1-R5 registers, preserve R6-R9, and store the return value into R0. Implicit input: ctx == skb
check_return_code | |
push_insn | t, w, e - match pseudo-code above: t - index of current instruction, w - next instruction, e - edge
check_cfg | non-recursive depth-first-search to detect loops in BPF program; loop == back-edge in directed graph (a minimal DFS sketch follows the table)
check_btf_func | |
check_btf_line | |
propagate_precision | find precise scalars in the previous equivalent state and propagate them into the current state
is_state_visited | |
do_check | |
check_map_prog_compatibility | |
replace_map_fd_with_map_ptr | look for pseudo eBPF instructions that access map FDs and replace them with actual map pointers
bpf_patch_insn_data | |
convert_ctx_accesses | convert load instructions that access fields of a context type into a sequence of instructions that access fields of the underlying structure: struct __sk_buff -> struct sk_buff, struct bpf_sock_ops -> struct sock
jit_subprogs | |
fixup_bpf_calls | fix up the insn->imm field of bpf_call instructions and inline eligible helpers as an explicit sequence of BPF instructions; this function is called after the eBPF program has passed verification
print_verification_stats | |
check_attach_btf_id | |
bpf_check | |
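
The loop check that check_cfg performs can be illustrated with a small, self-contained sketch. This is not the kernel's code: the graph, the `succ` table, and the node count are invented for illustration, and the real verifier additionally distinguishes fall-through and branch edges. The core idea is an explicit-stack (non-recursive) DFS in which an edge that reaches a node still on the stack is a back-edge, i.e. a loop.

```c
#include <stdio.h>

/* Classic tri-color DFS over a tiny made-up instruction graph:
 * WHITE = unvisited, GREY = on the DFS stack, BLACK = fully explored.
 * An edge into a GREY node points at an ancestor => back-edge => loop.
 */
enum color { WHITE, GREY, BLACK };

#define N 5

/* succ[i] holds up to two successors of node i; -1 means "none" */
static const int succ[N][2] = {
	{ 1, -1},	/* 0 -> 1 */
	{ 2, -1},	/* 1 -> 2 */
	{ 3,  1},	/* 2 -> 3 and 2 -> 1 (a back-edge) */
	{ 4, -1},	/* 3 -> 4 */
	{-1, -1},	/* 4: exit */
};

int main(void)
{
	enum color state[N] = { WHITE };	/* WHITE == 0 */
	int stack[N], top = 0;

	stack[top++] = 0;
	state[0] = GREY;

	while (top > 0) {
		int t = stack[top - 1];
		int descended = 0;

		for (int i = 0; i < 2; i++) {
			int w = succ[t][i];

			if (w < 0 || state[w] == BLACK)
				continue;
			if (state[w] == GREY) {
				/* w is an ancestor of t: loop found */
				printf("back-edge %d -> %d\n", t, w);
				return 1;
			}
			/* w is WHITE: push it and keep descending */
			state[w] = GREY;
			stack[top++] = w;
			descended = 1;
			break;
		}
		if (!descended) {	/* all successors explored */
			state[t] = BLACK;
			top--;
		}
	}
	printf("no loops\n");
	return 0;
}
```

Running this prints `back-edge 2 -> 1`; removing the `2 -> 1` edge makes it print `no loops`. check_cfg reacts to the same discovery by reporting a "back-edge" error rather than printing (with bounded loops handled separately on newer kernels), but the graph walk is the same shape.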
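In the same spirit, the 64-bit bounds arithmetic that adjust_scalar_min_max_vals warns about can be reduced to a toy for a single unsigned BPF_ADD. Everything here (`struct range`, `range_add`) is a hypothetical name invented for the example; the real code also tracks signed bounds, 32-bit subregisters, and known bits via tnums. The essential rule demonstrated: if adding two bounds can overflow, the tracker must widen to "completely unknown" rather than let the bound wrap.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy unsigned value-range tracker for one ADD, loosely modeled on
 * the unsigned half of the verifier's scalar bounds propagation.
 */
struct range {
	uint64_t umin;
	uint64_t umax;
};

static void range_add(struct range *dst, const struct range *src)
{
	/* If either bound sum wraps past 2^64, the result's bounds are
	 * no longer trustworthy: fall back to the full range instead of
	 * keeping a wrapped (and therefore wrong) min or max.
	 */
	if (dst->umin + src->umin < dst->umin ||
	    dst->umax + src->umax < dst->umax) {
		dst->umin = 0;
		dst->umax = UINT64_MAX;
	} else {
		dst->umin += src->umin;
		dst->umax += src->umax;
	}
}

int main(void)
{
	struct range a = { .umin = 0, .umax = 16 };
	struct range b = { .umin = 4, .umax = 8  };

	range_add(&a, &b);	/* [0,16] + [4,8] => [4,24] */
	printf("[%llu, %llu]\n",
	       (unsigned long long)a.umin,
	       (unsigned long long)a.umax);
	return 0;
}
```

The 32-bit caveat in the table entry is exactly about this kind of code: a bound computed without wrapping on 64 bits may still wrap when the program actually executes on a 32-bit subregister, so those cases need their own checks.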