============================
Kernel NFS Server Statistics
============================

:Authors: Greg Banks <gnb@sgi.com> - 26 Mar 2009

This document describes the format and semantics of the statistics
which the kernel NFS server makes available to userspace. These
statistics are available in several text form pseudo files, each of
which is described separately below.

In most cases you don't need to know these formats, as the nfsstat(8)
program from the nfs-utils distribution provides a helpful command-line
interface for extracting and printing them.

All the files described here are formatted as a sequence of text lines,
separated by newline '\n' characters. Lines beginning with a hash
'#' character are comments intended for humans and should be ignored
by parsing routines. All other lines contain a sequence of fields
separated by whitespace.

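As an illustration of this format, here is a minimal parsing sketch in
Python; the helper name is made up for the example and the default path
is just one of the files described below. It skips '#' comment lines and
splits every remaining line into whitespace-separated fields::

    def read_stats_fields(path="/proc/fs/nfsd/pool_stats"):
        """Return each non-comment line of a knfsd statistics file
        as a list of whitespace-separated field strings."""
        rows = []
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # comment lines are intended for humans only
                rows.append(line.split())
        return rows

nfsstat(8) already does this kind of parsing for you; the sketch only
shows how little the format demands of a parser.
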
/proc/fs/nfsd/pool_stats
========================

This file is available in kernels from 2.6.30 onwards, if the
/proc/fs/nfsd filesystem is mounted (it almost always should be).

The first line is a comment which describes the fields present in
all the other lines. The other lines present the following data as
a sequence of unsigned decimal numeric fields. One line is shown
for each NFS thread pool.

All counters are 64 bits wide and wrap naturally. There is no way
to zero these counters; instead, applications should do their own
rate conversion.

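A minimal sketch of such a rate conversion, assuming two samples of the
same counter taken a known number of seconds apart (the function name
and the explicit 64-bit wrap handling are illustrative, not an interface
provided by the kernel)::

    def counter_rate(prev, curr, seconds, width=64):
        """Per-second rate of a wrapping unsigned counter, correct
        across at most one wrap of the given bit width."""
        delta = (curr - prev) % (1 << width)
        return delta / seconds

Sampling twice and dividing by the interval is all that "rate
conversion" means here; the modulo step merely keeps the result sensible
if a counter happens to wrap between the two samples.
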
pool
    The id number of the NFS thread pool to which this line applies.
    This number does not change.

    Thread pool ids are a contiguous set of small integers starting
    at zero. The maximum value depends on the thread pool mode, but
    currently cannot be larger than the number of CPUs in the system.
    Note that in the default case there will be a single thread pool
    which contains all the nfsd threads and all the CPUs in the system,
    and thus this file will have a single line with a pool id of "0".

packets-arrived
    Counts how many NFS packets have arrived. More precisely, this
    is the number of times that the network stack has notified the
    sunrpc server layer that new data may be available on a transport
    (e.g. a TCP or UDP socket or an NFS/RDMA endpoint).

    Depending on the NFS workload patterns and various network stack
    effects (such as Large Receive Offload) which can combine packets
    on the wire, this may be either more or less than the number
    of NFS calls received (that statistic is available elsewhere).

    However, this is a more accurate and less workload-dependent measure
    of how much CPU load is being placed on the sunrpc server layer
    due to NFS network traffic.

sockets-enqueued
    Counts how many times an NFS transport is enqueued to wait for
    an nfsd thread to service it, i.e. no nfsd thread was considered
    available.

    The circumstance this statistic tracks indicates that there was NFS
    network-facing work to be done but it couldn't be done immediately,
    thus introducing a small delay in servicing NFS calls. The ideal
    rate of change for this counter is zero; significantly non-zero
    values may indicate a performance limitation.

    This can happen because there are too few nfsd threads in the thread
    pool for the NFS workload (the workload is thread-limited), in which
    case configuring more nfsd threads will probably improve the
    performance of the NFS workload. A sketch of such a rate check
    appears after the counter descriptions below.

threads-woken
    Counts how many times an idle nfsd thread is woken to try to
    receive some data from an NFS transport.

    This statistic tracks the circumstance where incoming
    network-facing NFS work is being handled quickly, which is a good
    thing. The ideal rate of change for this counter will be close
    to but less than the rate of change of the packets-arrived counter.

threads-timedout
    Counts how many times an nfsd thread triggered an idle timeout,
    i.e. was not woken to handle any incoming network packets for
    some time.

    This statistic counts a circumstance where there are more nfsd
    threads configured than can be used by the NFS workload. This is
    a clue that the number of nfsd threads can be reduced without
    affecting performance. Unfortunately, it's only a clue and not
    a strong indication, for a couple of reasons:

    - Currently the rate at which the counter is incremented is quite
      slow; the idle timeout is 60 minutes. Unless the NFS workload
      remains constant for hours at a time, this counter is unlikely
      to be providing information that is still useful.

    - It is usually a wise policy to provide some slack,
      i.e. configure a few more nfsds than are currently needed,
      to allow for future spikes in load.

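The sketch below puts the advice above into practice: it samples
pool_stats twice and reports how fast sockets-enqueued and
threads-timedout are growing for each pool. It assumes that the '#'
header comment names the columns in order using the field names
documented here; the helper names and the ten-second interval are
arbitrary choices for the example::

    import time

    def sample_pools(path="/proc/fs/nfsd/pool_stats"):
        """Return one dict of counters per thread pool, keyed by the
        column names taken from the leading '#' comment line."""
        with open(path) as f:
            lines = f.read().splitlines()
        header = next(l for l in lines if l.startswith("#"))
        names = header.lstrip("# ").split()
        return [dict(zip(names, map(int, l.split())))
                for l in lines if l and not l.startswith("#")]

    def report_rates(seconds=10.0):
        before = sample_pools()
        time.sleep(seconds)
        after = sample_pools()
        for old, new in zip(before, after):
            for name in ("sockets-enqueued", "threads-timedout"):
                delta = (new[name] - old[name]) % (1 << 64)  # 64-bit wrap
                print(f"pool {new['pool']}: {name} {delta / seconds:.2f}/s")

A persistently non-zero sockets-enqueued rate points towards configuring
more nfsd threads; a non-zero threads-timedout rate is, for the reasons
given above, only a weak hint that some could be removed.
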
Note that incoming packets on NFS transports will be dealt with in
one of three ways. An nfsd thread can be woken (threads-woken counts
this case), or the transport can be enqueued for later attention
(sockets-enqueued counts this case), or the packet can be temporarily
deferred because the transport is currently being used by an nfsd
thread. This last case is not very interesting and is not explicitly
counted, but can be inferred from the other counters thus::

    packets-deferred = packets-arrived - ( sockets-enqueued + threads-woken )

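In code, given a dict of counters for one pool (for example one entry
from the sample_pools() sketch above, with keys matching the documented
field names), the inferred value is simply::

    def packets_deferred(counters):
        """Deferred-packet count inferred from the explicit counters."""
        return counters["packets-arrived"] - (counters["sockets-enqueued"]
                                              + counters["threads-woken"])
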
More
====

Descriptions of the other statistics files should go here.