
.. SPDX-License-Identifier: GPL-2.0

.. _kernel_hacking_locktypes:

==========================
Lock types and their rules
==========================

Introduction
============

The kernel provides a variety of locking primitives which can be divided
into three categories:

- Sleeping locks
- CPU local locks
- Spinning locks

This document conceptually describes these lock types and provides rules
for their nesting, including the rules for use under PREEMPT_RT.

Lock categories
===============

Sleeping locks
--------------

Sleeping locks can only be acquired in preemptible task context.

Although implementations allow try_lock() from other contexts, it is
necessary to carefully evaluate the safety of unlock() as well as of
try_lock(). Furthermore, it is also necessary to evaluate the debugging
versions of these primitives. In short, don't acquire sleeping locks from
other contexts unless there is no other option.

Sleeping lock types:

- mutex
- rt_mutex
- semaphore
- rw_semaphore
- ww_mutex
- percpu_rw_semaphore

On PREEMPT_RT kernels, these lock types are converted to sleeping locks:

- local_lock
- spinlock_t
- rwlock_t

CPU local locks
---------------

- local_lock

On non-PREEMPT_RT kernels, local_lock functions are wrappers around
preemption and interrupt disabling primitives. In contrast to other locking
mechanisms, disabling preemption or interrupts is a purely CPU-local
concurrency control mechanism and is not suited for inter-CPU concurrency
control.

Spinning locks
--------------

- raw_spinlock_t
- bit spinlocks

On non-PREEMPT_RT kernels, these lock types are also spinning locks:

- spinlock_t
- rwlock_t

Spinning locks implicitly disable preemption and the lock / unlock functions
can have suffixes which apply further protections:

===================  ====================================================
_bh()                Disable / enable bottom halves (soft interrupts)
_irq()               Disable / enable interrupts
_irqsave/restore()   Save and disable / restore interrupt disabled state
===================  ====================================================
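
As an illustration, a short sketch of the suffix variants; the lock and
counter names are made up for this example::

  #include <linux/spinlock.h>

  static DEFINE_SPINLOCK(example_lock);
  static unsigned long example_counter;

  /* The data is also touched from softirq context. */
  static void update_from_task(void)
  {
          spin_lock_bh(&example_lock);    /* also disables bottom halves */
          example_counter++;
          spin_unlock_bh(&example_lock);
  }

  /*
   * The data is also touched from hard interrupt context and the
   * caller's interrupt state is unknown.
   */
  static void update_any_context(void)
  {
          unsigned long flags;

          spin_lock_irqsave(&example_lock, flags);
          example_counter++;
          spin_unlock_irqrestore(&example_lock, flags);
  }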

Owner semantics
===============

The aforementioned lock types except semaphores have strict owner
semantics:

  The context (task) that acquired the lock must release it.

rw_semaphores have a special interface which allows non-owner release for
readers.

rtmutex
=======

RT-mutexes are mutexes with support for priority inheritance (PI).

PI has limitations on non-PREEMPT_RT kernels due to preemption and
interrupt disabled sections.

PI clearly cannot preempt preemption-disabled or interrupt-disabled
regions of code, even on PREEMPT_RT kernels. Instead, PREEMPT_RT kernels
execute most such regions of code in preemptible task context, especially
interrupt handlers and soft interrupts. This conversion allows spinlock_t
and rwlock_t to be implemented via RT-mutexes.

semaphore
=========

semaphore is a counting semaphore implementation.

Semaphores are often used for both serialization and waiting, but new use
cases should instead use separate serialization and wait mechanisms, such
as mutexes and completions.
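
As an illustration, a sketch of that split, assuming a hypothetical driver
which previously used one semaphore both to serialize access to shared
state and to wait for an event (all names are made up)::

  #include <linux/mutex.h>
  #include <linux/completion.h>

  static DEFINE_MUTEX(dev_mutex);          /* serialization */
  static DECLARE_COMPLETION(dev_ready);    /* waiting */

  static void producer(void)
  {
          mutex_lock(&dev_mutex);
          /* ... update shared device state ... */
          mutex_unlock(&dev_mutex);

          complete(&dev_ready);            /* wake up one waiter */
  }

  static void consumer(void)
  {
          wait_for_completion(&dev_ready); /* sleep until the event arrives */

          mutex_lock(&dev_mutex);
          /* ... consume shared device state ... */
          mutex_unlock(&dev_mutex);
  }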

semaphores and PREEMPT_RT
-------------------------

PREEMPT_RT does not change the semaphore implementation because counting
semaphores have no concept of owners, thus preventing PREEMPT_RT from
providing priority inheritance for semaphores. After all, an unknown
owner cannot be boosted. As a consequence, blocking on semaphores can
result in priority inversion.

rw_semaphore
============

rw_semaphore is a multiple readers and single writer lock mechanism.

On non-PREEMPT_RT kernels the implementation is fair, thus preventing
writer starvation.

rw_semaphore complies by default with the strict owner semantics, but there
exist special-purpose interfaces that allow non-owner release for readers.
These interfaces work independently of the kernel configuration.
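
A sketch of the reader-side non-owner interface (the asynchronous split
shown here is made up for illustration): one task takes the rwsem for
reading and a different context later releases it::

  #include <linux/rwsem.h>

  static DECLARE_RWSEM(data_rwsem);       /* hypothetical rwsem */

  /* Submitting task: acquires the read side but does not release it. */
  static void start_async_read(void)
  {
          down_read_non_owner(&data_rwsem);
          /* ... hand the protected data to asynchronous machinery ... */
  }

  /* Completion context (a different task): releases the read side. */
  static void async_read_done(void)
  {
          up_read_non_owner(&data_rwsem);
  }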

rw_semaphore and PREEMPT_RT
---------------------------

PREEMPT_RT kernels map rw_semaphore to a separate rt_mutex-based
implementation, thus changing the fairness:

  Because an rw_semaphore writer cannot grant its priority to multiple
  readers, a preempted low-priority reader will continue holding its lock,
  thus starving even high-priority writers. In contrast, because readers
  can grant their priority to a writer, a preempted low-priority writer will
  have its priority boosted until it releases the lock, thus preventing that
  writer from starving readers.

local_lock
==========

local_lock provides a named scope to critical sections which are protected
by disabling preemption or interrupts.

On non-PREEMPT_RT kernels local_lock operations map to the preemption and
interrupt disabling and enabling primitives:

===============================  ======================
local_lock(&llock)               preempt_disable()
local_unlock(&llock)             preempt_enable()
local_lock_irq(&llock)           local_irq_disable()
local_unlock_irq(&llock)         local_irq_enable()
local_lock_irqsave(&llock)       local_irq_save()
local_unlock_irqrestore(&llock)  local_irq_restore()
===============================  ======================

The named scope of local_lock has two advantages over the regular
primitives:

- The lock name allows static analysis and is also clear documentation
  of the protection scope, while the regular primitives are scopeless and
  opaque.

- If lockdep is enabled, the local_lock gains a lockmap which allows
  validating the correctness of the protection. This can detect cases where
  e.g. a function using preempt_disable() as protection mechanism is
  invoked from interrupt or soft-interrupt context. Aside from that,
  lockdep_assert_held(&llock) works as with any other locking primitive.

local_lock and PREEMPT_RT
-------------------------

PREEMPT_RT kernels map local_lock to a per-CPU spinlock_t, thus changing
semantics:

- All spinlock_t changes also apply to local_lock.

local_lock usage
----------------

local_lock should be used in situations where disabling preemption or
interrupts is the appropriate form of concurrency control to protect
per-CPU data structures on a non-PREEMPT_RT kernel.

local_lock is not suitable to protect against preemption or interrupts on a
PREEMPT_RT kernel due to the PREEMPT_RT-specific spinlock_t semantics.
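
A minimal sketch of this usage pattern, with made-up names, protecting
per-CPU statistics updated from task context::

  #include <linux/local_lock.h>
  #include <linux/percpu.h>

  struct pcpu_stats {
          local_lock_t lock;
          unsigned long events;
  };

  static DEFINE_PER_CPU(struct pcpu_stats, pcpu_stats) = {
          .lock = INIT_LOCAL_LOCK(lock),
  };

  static void count_event(void)
  {
          /*
           * preempt_disable() on non-PREEMPT_RT kernels, a per-CPU
           * spinlock_t on PREEMPT_RT kernels.
           */
          local_lock(&pcpu_stats.lock);
          this_cpu_inc(pcpu_stats.events);
          local_unlock(&pcpu_stats.lock);
  }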

raw_spinlock_t and spinlock_t
=============================

raw_spinlock_t
--------------

raw_spinlock_t is a strict spinning lock implementation in all kernels,
including PREEMPT_RT kernels. Use raw_spinlock_t only in truly critical
core code, low-level interrupt handling and places where disabling
preemption or interrupts is required, for example, to safely access
hardware state. raw_spinlock_t can sometimes also be used when the
critical section is tiny, thus avoiding RT-mutex overhead.
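
A sketch of the kind of low-level use this is meant for, with a
hypothetical device whose two register writes must be truly atomic (the
names and register offsets are made up)::

  #include <linux/spinlock.h>
  #include <linux/io.h>

  static DEFINE_RAW_SPINLOCK(hw_lock);

  static void hw_write_pair(void __iomem *base, u32 ctrl, u32 data)
  {
          unsigned long flags;

          /* A true spinning lock on all kernels, including PREEMPT_RT. */
          raw_spin_lock_irqsave(&hw_lock, flags);
          writel(data, base + 0x04);
          writel(ctrl, base + 0x00);
          raw_spin_unlock_irqrestore(&hw_lock, flags);
  }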

spinlock_t
----------

The semantics of spinlock_t change with the state of PREEMPT_RT.

On a non-PREEMPT_RT kernel spinlock_t is mapped to raw_spinlock_t and has
exactly the same semantics.

spinlock_t and PREEMPT_RT
-------------------------

On a PREEMPT_RT kernel spinlock_t is mapped to a separate implementation
based on rt_mutex which changes the semantics:

- Preemption is not disabled.

- The hard interrupt related suffixes for spin_lock / spin_unlock
  operations (_irq, _irqsave / _irqrestore) do not affect the CPU's
  interrupt disabled state.

- The soft interrupt related suffix (_bh()) still disables softirq
  handlers.

  Non-PREEMPT_RT kernels disable preemption to get this effect.

  PREEMPT_RT kernels use a per-CPU lock for serialization which keeps
  preemption enabled. The lock disables softirq handlers and also
  prevents reentrancy due to task preemption.

PREEMPT_RT kernels preserve all other spinlock_t semantics:

- Tasks holding a spinlock_t do not migrate. Non-PREEMPT_RT kernels
  avoid migration by disabling preemption. PREEMPT_RT kernels instead
  disable migration, which ensures that pointers to per-CPU variables
  remain valid even if the task is preempted.

- Task state is preserved across spinlock acquisition, ensuring that the
  task-state rules apply to all kernel configurations. Non-PREEMPT_RT
  kernels leave task state untouched. However, PREEMPT_RT must change
  task state if the task blocks during acquisition. Therefore, it saves
  the current task state before blocking and the corresponding lock wakeup
  restores it, as shown below::

    task->state = TASK_INTERRUPTIBLE
    lock()
      block()
        task->saved_state = task->state
        task->state = TASK_UNINTERRUPTIBLE
        schedule()
                                        lock wakeup
                                          task->state = task->saved_state

  Other types of wakeups would normally unconditionally set the task state
  to RUNNING, but that does not work here because the task must remain
  blocked until the lock becomes available. Therefore, when a non-lock
  wakeup attempts to awaken a task blocked waiting for a spinlock, it
  instead sets the saved state to RUNNING. Then, when the lock
  acquisition completes, the lock wakeup sets the task state to the saved
  state, in this case setting it to RUNNING::

    task->state = TASK_INTERRUPTIBLE
    lock()
      block()
        task->saved_state = task->state
        task->state = TASK_UNINTERRUPTIBLE
        schedule()
                                        non lock wakeup
                                          task->saved_state = TASK_RUNNING

                                        lock wakeup
                                          task->state = task->saved_state

  This ensures that the real wakeup cannot be lost.

rwlock_t
========

rwlock_t is a multiple readers and single writer lock mechanism.

Non-PREEMPT_RT kernels implement rwlock_t as a spinning lock and the
suffix rules of spinlock_t apply accordingly. The implementation is fair,
thus preventing writer starvation.
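
A minimal sketch of the read / write interface, with made-up names::

  #include <linux/spinlock.h>

  static DEFINE_RWLOCK(table_lock);
  static int table[16];

  static int table_lookup(int idx)
  {
          int val;

          read_lock(&table_lock);         /* many readers can hold this */
          val = table[idx];
          read_unlock(&table_lock);
          return val;
  }

  static void table_update(int idx, int val)
  {
          write_lock(&table_lock);        /* excludes readers and writers */
          table[idx] = val;
          write_unlock(&table_lock);
  }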

rwlock_t and PREEMPT_RT
-----------------------

PREEMPT_RT kernels map rwlock_t to a separate rt_mutex-based
implementation, thus changing semantics:

- All the spinlock_t changes also apply to rwlock_t.

- Because an rwlock_t writer cannot grant its priority to multiple
  readers, a preempted low-priority reader will continue holding its lock,
  thus starving even high-priority writers. In contrast, because readers
  can grant their priority to a writer, a preempted low-priority writer
  will have its priority boosted until it releases the lock, thus
  preventing that writer from starving readers.

PREEMPT_RT caveats
==================

local_lock on RT
----------------

The mapping of local_lock to spinlock_t on PREEMPT_RT kernels has a few
implications. For example, on a non-PREEMPT_RT kernel the following code
sequence works as expected::

  local_lock_irq(&local_lock);
  raw_spin_lock(&lock);

and is fully equivalent to::

  raw_spin_lock_irq(&lock);

On a PREEMPT_RT kernel this code sequence breaks because local_lock_irq()
is mapped to a per-CPU spinlock_t which neither disables interrupts nor
preemption. The following code sequence works correctly on both
PREEMPT_RT and non-PREEMPT_RT kernels::

  local_lock_irq(&local_lock);
  spin_lock(&lock);

Another caveat with local locks is that each local_lock has a specific
protection scope. So the following substitution is wrong::

  func1()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_1, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock_1, flags);
  }

  func2()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock_2, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock_2, flags);
  }

  func3()
  {
    lockdep_assert_irqs_disabled();
    access_protected_data();
  }

On a non-PREEMPT_RT kernel this works correctly, but on a PREEMPT_RT kernel
local_lock_1 and local_lock_2 are distinct and cannot serialize the callers
of func3(). Also the lockdep assert will trigger on a PREEMPT_RT kernel
because local_lock_irqsave() does not disable interrupts due to the
PREEMPT_RT-specific semantics of spinlock_t. The correct substitution is::

  func1()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock, flags);
  }

  func2()
  {
    local_irq_save(flags);    -> local_lock_irqsave(&local_lock, flags);
    func3();
    local_irq_restore(flags); -> local_unlock_irqrestore(&local_lock, flags);
  }

  func3()
  {
    lockdep_assert_held(&local_lock);
    access_protected_data();
  }

spinlock_t and rwlock_t
-----------------------

The changes in spinlock_t and rwlock_t semantics on PREEMPT_RT kernels
have a few implications. For example, on a non-PREEMPT_RT kernel the
following code sequence works as expected::

  local_irq_disable();
  spin_lock(&lock);

and is fully equivalent to::

  spin_lock_irq(&lock);

The same applies to rwlock_t and the _irqsave() suffix variants.

On a PREEMPT_RT kernel this code sequence breaks because the RT-mutex
requires a fully preemptible context. Instead, use spin_lock_irq() or
spin_lock_irqsave() and their unlock counterparts. In cases where the
interrupt disabling and locking must remain separate, PREEMPT_RT offers a
local_lock mechanism. Acquiring the local_lock pins the task to a CPU,
allowing things like per-CPU interrupt disabled locks to be acquired.
However, this approach should be used only where absolutely necessary.

A typical scenario is protection of per-CPU variables in thread context::

  struct foo *p = get_cpu_ptr(&var1);

  spin_lock(&p->lock);
  p->count += this_cpu_read(var2);

This is correct code on a non-PREEMPT_RT kernel, but on a PREEMPT_RT kernel
this breaks. The PREEMPT_RT-specific change of spinlock_t semantics does
not allow acquiring p->lock because get_cpu_ptr() implicitly disables
preemption. The following substitution works on both kernels::

  struct foo *p;

  migrate_disable();
  p = this_cpu_ptr(&var1);
  spin_lock(&p->lock);
  p->count += this_cpu_read(var2);

migrate_disable() ensures that the task is pinned on the current CPU which
in turn guarantees that the per-CPU accesses to var1 and var2 stay on the
same CPU while the task remains preemptible.

The migrate_disable() substitution is not valid for the following
scenario::

  func()
  {
    struct foo *p;

    migrate_disable();
    p = this_cpu_ptr(&var1);
    p->val = func2();

This breaks because migrate_disable() does not protect against reentrancy
from a preempting task. A correct substitution for this case is::

  func()
  {
    struct foo *p;

    local_lock(&foo_lock);
    p = this_cpu_ptr(&var1);
    p->val = func2();

On a non-PREEMPT_RT kernel this protects against reentrancy by disabling
preemption. On a PREEMPT_RT kernel this is achieved by acquiring the
underlying per-CPU spinlock.

raw_spinlock_t on RT
--------------------

Acquiring a raw_spinlock_t disables preemption and possibly also
interrupts, so the critical section must not acquire a regular spinlock_t
or rwlock_t. This also means, for example, that the critical section must
not allocate memory. Thus, on a non-PREEMPT_RT kernel the following code
works perfectly::

  raw_spin_lock(&lock);
  p = kmalloc(sizeof(*p), GFP_ATOMIC);

But this code fails on PREEMPT_RT kernels because the memory allocator is
fully preemptible and therefore cannot be invoked from truly atomic
contexts. However, it is perfectly fine to invoke the memory allocator
while holding normal non-raw spinlocks because they do not disable
preemption on PREEMPT_RT kernels::

  spin_lock(&lock);
  p = kmalloc(sizeof(*p), GFP_ATOMIC);

bit spinlocks
-------------

PREEMPT_RT cannot substitute bit spinlocks because a single bit is too
small to accommodate an RT-mutex. Therefore, the semantics of bit
spinlocks are preserved on PREEMPT_RT kernels, so that the raw_spinlock_t
caveats also apply to bit spinlocks.

Some bit spinlocks are replaced with regular spinlock_t for PREEMPT_RT
using conditional (#ifdef'ed) code changes at the usage site. In contrast,
usage-site changes are not needed for the spinlock_t substitution.
Instead, conditionals in header files and the core locking implementation
enable the compiler to do the substitution transparently.
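
For reference, a sketch of the bit spinlock interface, using a made-up
structure whose lowest flag bit doubles as the lock::

  #include <linux/bit_spinlock.h>

  #define NODE_LOCK_BIT  0                /* hypothetical lock bit */

  struct packed_node {
          unsigned long flags;            /* bit 0 is the lock */
          int value;
  };

  static void node_set_value(struct packed_node *node, int value)
  {
          /*
           * Spins on bit 0 of node->flags; behaves like a raw spinning
           * lock on all kernels, including PREEMPT_RT.
           */
          bit_spin_lock(NODE_LOCK_BIT, &node->flags);
          node->value = value;
          bit_spin_unlock(NODE_LOCK_BIT, &node->flags);
  }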

Lock type nesting rules
=======================

The most basic rules are:

- Lock types of the same lock category (sleeping, CPU local, spinning)
  can nest arbitrarily as long as they respect the general lock ordering
  rules to prevent deadlocks.

- Sleeping lock types cannot nest inside CPU local and spinning lock types.

- CPU local and spinning lock types can nest inside sleeping lock types.

- Spinning lock types can nest inside all lock types.

These constraints apply both in PREEMPT_RT and otherwise.

The fact that PREEMPT_RT changes the lock category of spinlock_t and
rwlock_t from spinning to sleeping and substitutes local_lock with a
per-CPU spinlock_t means that they cannot be acquired while holding a raw
spinlock. This results in the following nesting ordering:

  1) Sleeping locks
  2) spinlock_t, rwlock_t, local_lock
  3) raw_spinlock_t and bit spinlocks

Lockdep will complain if these constraints are violated, both in
PREEMPT_RT and otherwise.
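
As an illustration, a minimal sketch (with made-up locks) of a nesting
order that follows these rules on both non-PREEMPT_RT and PREEMPT_RT
kernels::

  #include <linux/mutex.h>
  #include <linux/spinlock.h>

  static DEFINE_MUTEX(big_mutex);         /* sleeping lock  */
  static DEFINE_SPINLOCK(mid_lock);       /* spinlock_t     */
  static DEFINE_RAW_SPINLOCK(low_lock);   /* raw_spinlock_t */

  static void nested_update(void)
  {
          mutex_lock(&big_mutex);         /* 1) sleeping lock  */
          spin_lock(&mid_lock);           /* 2) spinlock_t     */
          raw_spin_lock(&low_lock);       /* 3) raw_spinlock_t */

          /* ... innermost critical section ... */

          raw_spin_unlock(&low_lock);
          spin_unlock(&mid_lock);
          mutex_unlock(&big_mutex);
  }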