Function report

Linux Kernel

v5.5.9


Source Code: mm/readahead.c    Create Date: 2022-07-28 14:12:12
Last Modify: 2020-03-17 21:13:07

Name: ondemand_readahead - A minimal readahead algorithm for trivial sequential/random reads.

Proto: static unsigned long ondemand_readahead(struct address_space *mapping, struct file_ra_state *ra, struct file *filp, bool hit_readahead_marker, unsigned long offset, unsigned long req_size)

Type: unsigned long

Parameter:

Type                      Name
struct address_space *    mapping
struct file_ra_state *    ra
struct file *             filp
bool                      hit_readahead_marker
unsigned long             offset
unsigned long             req_size
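
The window fields referenced throughout the listing below - ra->start, ra->size, ra->async_size, ra->prev_pos, ra->ra_pages - live in struct file_ra_state. For reference, a self-contained paraphrase of the v5.5 definition in include/linux/fs.h (pgoff_t and loff_t are spelled out here so the sketch stands alone):

typedef unsigned long pgoff_t;   /* page-cache index */
typedef long long loff_t;        /* byte offset */

struct file_ra_state {
    pgoff_t start;               /* where readahead started */
    unsigned int size;           /* # of readahead pages */
    unsigned int async_size;     /* do asynchronous readahead when
                                    only this many pages are ahead */
    unsigned int ra_pages;       /* maximum readahead window */
    unsigned int mmap_miss;      /* cache miss stat for mmap accesses */
    loff_t prev_pos;             /* last read() position, in bytes */
};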
387  bdi = inode_to_bdi(mapping->host)
388  max_pages = ra->ra_pages (the maximum readahead window)
396  If req_size > max_pages && bdi->io_pages > max_pages Then max_pages = min(req_size, bdi->io_pages) - let a request larger than the readahead window grow up to the optimal hardware IO size
402  If Not offset Then Go to initial_readahead (start of file)
409  If offset == ra->start + ra->size - ra->async_size || offset == ra->start + ra->size Then (the expected callback offset: assume sequential access, ramp up the sizes, and push the readahead window forward)
411  ra->start += ra->size
412  ra->size = get_next_ra_size(ra, max_pages) - take the previous window size, ramp it up, and use it as the new window size
413  ra->async_size = ra->size
414  Go to readit
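
The ramp-up on line 412 grows the previous window aggressively while it is small and saturates at max_pages. A user-space model of get_next_ra_size() from mm/readahead.c (v5.5); unlike the kernel function, which reads ra->size from the file_ra_state, this sketch takes the current window size directly:

/* Grow the previous readahead window, clamped to the per-file maximum. */
unsigned long get_next_ra_size(unsigned long cur, unsigned long max)
{
    if (cur < max / 16)
        return 4 * cur;     /* small window: quadruple it */
    if (cur <= max / 2)
        return 2 * cur;     /* medium window: double it */
    return max;             /* otherwise saturate at the maximum */
}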
423  If hit_readahead_marker Then (a marked page was hit without valid readahead state, e.g. interleaved reads; query the page cache for the old async_size, ramp it up, and use it as the new readahead size)
426  rcu_read_lock() - enter an RCU read-side critical section for the page cache probe
427  start = page_cache_next_miss(mapping, offset + 1, max_pages) - find the next gap in the page cache, i.e. the lowest missing index in [offset + 1, offset + max_pages]
428  rcu_read_unlock()
430  If Not start || start - offset > max_pages Then Return 0
433  ra->start = start
434  ra->size = start - offset (the old async_size)
435  ra->size += req_size
436  ra->size = get_next_ra_size(ra, max_pages) - ramp the window up as above
437  ra->async_size = ra->size
438  Go to readit
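
Interleaved readers share one file_ra_state, so its fields cannot be trusted here; instead the kernel probes the page cache for the first hole after offset and treats the cached run [offset + 1, start) as the readahead marker region this stream left behind (the old async_size). A toy model of that probe, assuming a plain bool array in place of the kernel's xarray-backed page cache:

#include <stdbool.h>

/* Returns the lowest uncached index in [index, index + max_scan - 1].
 * If every probed page is present (or the probe runs off the end of
 * the nr_pages-slot array), it returns index + max_scan - an index
 * outside the range - which the "start - offset > max_pages" test on
 * line 430 then rejects. */
unsigned long next_miss(const bool *cached, unsigned long nr_pages,
                        unsigned long index, unsigned long max_scan)
{
    for (unsigned long i = 0; i < max_scan && index + i < nr_pages; i++)
        if (!cached[index + i])
            return index + i;
    return index + max_scan;
}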
444  If req_size > max_pages Then Go to initial_readahead (oversize read)
452  prev_offset = ra->prev_pos >> PAGE_SHIFT (the page index of the last read() position; shifting by PAGE_SHIFT converts a byte offset to a page index)
453  If offset - prev_offset <= 1UL Then Go to initial_readahead (sequential cache miss: covers the trivial sequential case, offset - prev_offset == 1, and unaligned reads, offset - prev_offset == 0)
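
The test on line 453 leans on unsigned arithmetic: a backward seek (offset < prev_offset) wraps the subtraction around to a huge value, so only forward steps of 0 or 1 page count as sequential. A small demonstration:

#include <stdio.h>

int main(void)
{
    unsigned long prev_offset = 100;
    unsigned long offsets[] = { 100, 101, 102, 99 };

    for (int i = 0; i < 4; i++) {
        unsigned long offset = offsets[i];
        printf("offset %3lu: %s\n", offset,
               offset - prev_offset <= 1UL ? "sequential" : "random");
    }
    return 0;   /* 100 and 101 test sequential; 102 and 99 do not */
}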
460  If try_context_readahead(mapping, ra, offset, req_size, max_pages) Then Go to readit (query the page cache for the traces - cached history pages - that a sequential stream would leave behind)
467  Return __do_page_cache_readahead(mapping, filp, offset, req_size, 0) - a standalone, small random read: read it as-is and do not pollute the readahead state. __do_page_cache_readahead() allocates the pages first, then submits them for I/O, avoiding the very bad behaviour that would occur if page allocation caused VM writeback.
469  initial_readahead :
470  ra->start = offset
471  ra->size = get_init_ra_size(req_size, max_pages) - set the initial window size: round the request up to the next power of two, then scale it (per the kernel comment: x4 for small reads, x2 for medium, the maximum for large; with a 128k / 32-page max ra, 1-8 page reads get a 32k initial window, larger reads 128k)
472  ra->async_size = If ra->size > req_size Then ra->size - req_size Else ra->size
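
A user-space model of get_init_ra_size() from mm/readahead.c (v5.5); roundup_pow_of_two() is open-coded here since the kernel helper is not available outside the kernel:

/* Round n up to the next power of two (returns 1 for n == 0). */
unsigned long roundup_pow2(unsigned long n)
{
    unsigned long p = 1;
    while (p < n)
        p <<= 1;
    return p;
}

/* Pick the first readahead window from the request size. */
unsigned long get_init_ra_size(unsigned long size, unsigned long max)
{
    unsigned long newsize = roundup_pow2(size);

    if (newsize <= max / 32)
        newsize = newsize * 4;      /* small read: quadruple */
    else if (newsize <= max / 4)
        newsize = newsize * 2;      /* medium read: double */
    else
        newsize = max;              /* large read: take the maximum */
    return newsize;
}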
474  readit :
481  If offset == ra->start && ra->size == ra->async_size Then (this read will hit the readahead marker made by itself, so trigger the marker hit now and merge the resulting next window into the current one, respecting max_pages as above)
482  add_pages = get_next_ra_size(ra, max_pages)
483  If ra->size + add_pages <= max_pages Then ra->async_size = add_pages; ra->size += add_pages
486  Else ra->size = max_pages; ra->async_size = max_pages >> 1
492  Return ra_submit(ra, mapping, filp) - submit IO for the readahead request described in file_ra_state
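
The merge on lines 481-488 can be sketched in user space. ra_window is a hypothetical stand-in for the three window fields of file_ra_state, and get_next_ra_size() is the ramp-up model shown after line 414:

struct ra_window {                  /* hypothetical stand-in */
    unsigned long start;
    unsigned long size;
    unsigned long async_size;
};

unsigned long get_next_ra_size(unsigned long cur, unsigned long max);

/* If this read begins exactly at the window it is about to mark
 * (offset == start && size == async_size), it would immediately hit
 * its own readahead marker; fold the next window in now instead of
 * taking a second trip through ondemand_readahead(). */
void merge_next_window(struct ra_window *ra, unsigned long offset,
                       unsigned long max_pages)
{
    if (offset != ra->start || ra->size != ra->async_size)
        return;

    unsigned long add_pages = get_next_ra_size(ra->size, max_pages);

    if (ra->size + add_pages <= max_pages) {
        ra->async_size = add_pages; /* the merged-in part stays async */
        ra->size += add_pages;
    } else {
        ra->size = max_pages;       /* clamp; keep half asynchronous */
        ra->async_size = max_pages >> 1;
    }
}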
Caller

Name                         Describe
page_cache_sync_readahead    generic file readahead: submits readahead for a cache miss at @offset in @mapping, using the readahead state in @ra; @filp is passed on to ->readpage() and ->readpages()
page_cache_async_readahead   file readahead for marked pages: submits further readahead when the readahead-marked page at @offset is consumed; @filp is passed on to ->readpage() and ->readpages()