
Deadline IO scheduler tunables
==============================

This little file attempts to document how the deadline io scheduler works.
In particular, it will clarify the meaning of the exposed tunables that may be
of interest to power users.

Each io queue has a set of io scheduler tunables associated with it. These
tunables control how the io scheduler works. You can find these entries
in:

/sys/block/<device>/queue/iosched

assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted,
you can do so by typing:

# mount none /sys -t sysfs
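
For example, assuming the disk shows up as sda (the device name here is
purely illustrative), the tunables can be listed and inspected with ordinary
shell commands:

# ls /sys/block/sda/queue/iosched
# cat /sys/block/sda/queue/iosched/read_expire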

********************************************************************************

read_expire (in ms)
-----------

The goal of the deadline io scheduler is to attempt to guarantee a start
service time for a request. As we focus mainly on read latencies, this is
tunable. When a read request first enters the io scheduler, it is assigned
a deadline that is the current time + the read_expire value in units of
milliseconds.
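
As a rough illustration of how this is tuned in practice (the device name sda
and the value 250 are examples only, not defaults), a tighter read deadline
could be requested with:

# echo 250 > /sys/block/sda/queue/iosched/read_expire

The value written is interpreted in milliseconds, as described above.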

write_expire (in ms)
-----------

Similar to read_expire mentioned above, but for writes.
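
It is adjusted in the same way; for instance (sda and the value 3000 again
being illustrative only):

# echo 3000 > /sys/block/sda/queue/iosched/write_expire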

fifo_batch
----------

When a read request expires its deadline, we must move some requests from
the sorted io scheduler list to the block device dispatch queue. fifo_batch
controls how many requests we move, based on the cost of each request. A
request is either qualified as a seek or a stream. The io scheduler knows
the last request that was serviced by the drive (or will be serviced right
before this one). See seek_cost and stream_unit.
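
As an illustration (with a hypothetical sda device and an arbitrary value),
the current batch size can be read and a larger one set with:

# cat /sys/block/sda/queue/iosched/fifo_batch
# echo 32 > /sys/block/sda/queue/iosched/fifo_batch

Broadly speaking, larger batches trade individual request latency for
throughput, and smaller batches do the opposite.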

writes_starved (number of dispatches)
-------------

When we have to move requests from the io scheduler queue to the block
device dispatch queue, we always give a preference to reads. However, we
don't want to starve writes indefinitely either. So writes_starved controls
how many times we give preference to reads over writes. When that has been
done writes_starved number of times, we dispatch some writes based on the
same criteria as reads.
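
For example (device name and value illustrative only), to allow reads to be
preferred a few more times before writes get dispatched:

# echo 4 > /sys/block/sda/queue/iosched/writes_starved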

front_merges (bool)
------------

Sometimes it happens that a request enters the io scheduler that is contiguous
with a request that is already on the queue. Either it fits in the back of that
request, or it fits at the front. That is called either a back merge candidate
or a front merge candidate. Due to the way files are typically laid out,
back merges are much more common than front merges. For some work loads, you
may even know that it is a waste of time to attempt to front merge requests.
Setting front_merges to 0 disables this functionality. Front merges may still
occur due to the cached last_merge hint, but since that comes at basically 0
cost we leave that on. We simply disable the rbtree front sector lookup when
the io scheduler merge function is called.
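
For such workloads the rbtree front sector lookup can be switched off like so
(sda again being only an example device):

# echo 0 > /sys/block/sda/queue/iosched/front_merges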

Nov 11 2002, Jens Axboe <axboe@suse.de>