
===============================================================
Softlockup detector and hardlockup detector (aka nmi_watchdog)
===============================================================

The Linux kernel can act as a watchdog to detect both soft and hard
lockups.

A 'softlockup' is defined as a bug that causes the kernel to loop in
kernel mode for more than 20 seconds (see "Implementation" below for
details), without giving other tasks a chance to run. The current
stack trace is displayed upon detection and, by default, the system
will stay locked up. Alternatively, the kernel can be configured to
panic; a sysctl, "kernel.softlockup_panic", a kernel parameter,
"softlockup_panic" (see "Documentation/admin-guide/kernel-parameters.rst"
for details), and a compile option, "BOOTPARAM_SOFTLOCKUP_PANIC", are
provided for this.

A 'hardlockup' is defined as a bug that causes the CPU to loop in
kernel mode for more than 10 seconds (see "Implementation" below for
details), without letting other interrupts have a chance to run.
Similarly to the softlockup case, the current stack trace is displayed
upon detection and the system will stay locked up unless the default
behavior is changed, which can be done through a sysctl,
'hardlockup_panic', a compile time knob, "BOOTPARAM_HARDLOCKUP_PANIC",
and a kernel parameter, "nmi_watchdog"
(see "Documentation/admin-guide/kernel-parameters.rst" for details).

The panic option can be used in combination with panic_timeout (this
timeout is set through the confusingly named "kernel.panic" sysctl),
to cause the system to reboot automatically after a specified amount
of time.
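
For example, a minimal userspace sketch that enables panic for both
detectors and arms an automatic reboot can simply write the sysctl
files directly (root required; the 60-second timeout is only an
example value)::

    /* Enable panic on soft and hard lockups, then make the machine
     * reboot 60 seconds after the panic fires. Equivalent to setting
     * kernel.softlockup_panic, kernel.hardlockup_panic and kernel.panic. */
    #include <stdio.h>
    #include <stdlib.h>

    static void write_sysctl(const char *path, const char *value)
    {
        FILE *f = fopen(path, "w");

        if (!f) {
            perror(path);
            exit(EXIT_FAILURE);
        }
        fputs(value, f);
        fclose(f);
    }

    int main(void)
    {
        write_sysctl("/proc/sys/kernel/softlockup_panic", "1");
        write_sysctl("/proc/sys/kernel/hardlockup_panic", "1");
        write_sysctl("/proc/sys/kernel/panic", "60"); /* reboot after 60 s */
        return 0;
    }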

Implementation
==============

The soft and hard lockup detectors are built on top of the hrtimer and
perf subsystems, respectively. A direct consequence of this is that,
in principle, they should work on any architecture where these
subsystems are present.

A periodic hrtimer runs to generate interrupts and kick the watchdog
task. An NMI perf event is generated every "watchdog_thresh"
(compile-time initialized to 10 and configurable through the sysctl of
the same name) seconds to check for hardlockups. If any CPU in the
system does not receive any hrtimer interrupt during that time, the
'hardlockup detector' (the handler for the NMI perf event) will
generate a kernel warning or call panic, depending on the
configuration.
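
The check itself boils down to comparing two counters across NMIs. The
sketch below is a reduced userspace model of that logic, not the kernel
code (which lives in kernel/watchdog.c and keeps this state per CPU);
the names are illustrative::

    /* Hardlockup check, modeled in userspace. */
    #include <stdbool.h>
    #include <stdio.h>

    struct cpu_watchdog {
        unsigned long hrtimer_interrupts;       /* bumped by each hrtimer tick */
        unsigned long hrtimer_interrupts_saved; /* snapshot from the last NMI */
    };

    /* Runs in the periodic hrtimer interrupt. */
    static void hrtimer_tick(struct cpu_watchdog *cpu)
    {
        cpu->hrtimer_interrupts++;
    }

    /* Runs in the NMI perf event every watchdog_thresh seconds: if the
     * tick counter has not moved since the previous NMI, interrupts on
     * this CPU were blocked for the whole interval. */
    static bool is_hardlockup(struct cpu_watchdog *cpu)
    {
        bool stuck = cpu->hrtimer_interrupts == cpu->hrtimer_interrupts_saved;

        cpu->hrtimer_interrupts_saved = cpu->hrtimer_interrupts;
        return stuck;
    }

    int main(void)
    {
        struct cpu_watchdog cpu = { 0, 0 };

        hrtimer_tick(&cpu);
        printf("after a tick: %d\n", is_hardlockup(&cpu));   /* 0: ok */
        printf("no ticks since: %d\n", is_hardlockup(&cpu)); /* 1: stuck */
        return 0;
    }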

The watchdog task is a high-priority kernel thread that updates a
timestamp every time it is scheduled. If that timestamp is not updated
for 2*watchdog_thresh seconds (the softlockup threshold), the
'softlockup detector' (coded inside the hrtimer callback function)
will dump useful debug information to the system log, after which it
will either call panic, if it was instructed to do so, or resume
execution of other kernel code.
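
The softlockup side reduces to a stale-timestamp check. Again, this is
a userspace model with illustrative names, not the kernel code; time is
in seconds::

    /* Softlockup check, modeled in userspace. */
    #include <stdbool.h>
    #include <stdio.h>

    static unsigned int watchdog_thresh = 10; /* sysctl kernel.watchdog_thresh */
    static unsigned long watchdog_touch_ts;   /* last time the watchdog task ran */

    /* The high-priority watchdog thread does essentially only this. */
    static void watchdog_task_ran(unsigned long now)
    {
        watchdog_touch_ts = now;
    }

    /* Runs in the hrtimer callback: if the watchdog task has not been
     * scheduled for more than 2*watchdog_thresh seconds, something has
     * been hogging the CPU in kernel mode. */
    static bool is_softlockup(unsigned long now)
    {
        return now - watchdog_touch_ts > 2 * watchdog_thresh;
    }

    int main(void)
    {
        watchdog_task_ran(0);                    /* thread last ran at t = 0 */
        printf("t=19: %d\n", is_softlockup(19)); /* 0: within the 20 s window */
        printf("t=21: %d\n", is_softlockup(21)); /* 1: threshold exceeded */
        return 0;
    }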

The period of the hrtimer is 2*watchdog_thresh/5, which means it has
two or three chances to generate an interrupt before the hardlockup
detector kicks in.
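
With the default watchdog_thresh of 10, for instance, the hrtimer
period works out to 2*10/5 = 4 seconds, so within one 10-second
hardlockup window the timer fires at roughly the 4- and 8-second marks,
with a third firing possible depending on how the two timers align.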

As explained above, a kernel knob is provided that allows
administrators to configure the period of the hrtimer and the perf
event. The right value for a particular environment is a trade-off
between fast response to lockups and detection overhead.
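
As a concrete illustration, doubling watchdog_thresh from the default
10 to 20 halves how often the hrtimer and the NMI perf event fire, but
also stretches detection latency to 20 seconds for hardlockups and 40
seconds for softlockups.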

By default, the watchdog runs on all online cores. However, on a
kernel configured with NO_HZ_FULL, by default the watchdog runs only
on the housekeeping cores, not the cores specified in the "nohz_full"
boot argument. If we allowed the watchdog to run by default on
the "nohz_full" cores, we would have to run timer ticks to activate
the scheduler, which would prevent the "nohz_full" functionality
from protecting the user code on those cores from the kernel.

Of course, disabling it by default on the nohz_full cores means that
when those cores do enter the kernel, by default we will not be
able to detect if they lock up. However, allowing the watchdog
to continue to run on the housekeeping (non-tickless) cores means
that we will continue to detect lockups properly on those cores.

In either case, the set of cores excluded from running the watchdog
may be adjusted via the kernel.watchdog_cpumask sysctl. For
nohz_full cores, this may be useful for debugging a case where the
kernel seems to be hanging on the nohz_full cores.
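
From userspace, that adjustment is a plain sysctl write. A minimal
sketch, assuming CPUs 0-3 are the cores of interest (the mask value is
only an example)::

    /* Choose where the watchdog runs by writing kernel.watchdog_cpumask.
     * On a nohz_full system, listing the nohz_full cores here re-enables
     * lockup detection on them while debugging a suspected hang. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/watchdog_cpumask", "w");

        if (!f) {
            perror("watchdog_cpumask");
            return EXIT_FAILURE;
        }
        fputs("0-3", f); /* cpulist format, e.g. "0-3" or "0,2,4" */
        fclose(f);
        return 0;
    }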