On atomic types (atomic_t, atomic64_t and atomic_long_t).

The atomic type provides an interface to the architecture's means of atomic
RMW operations between CPUs (atomic operations on MMIO are not supported and
can lead to fatal traps on some platforms).

API
---

The 'full' API consists of (atomic64_ and atomic_long_ prefixes omitted for
brevity):

Non-RMW ops:

  atomic_read(), atomic_set()
  atomic_read_acquire(), atomic_set_release()

RMW atomic operations:

Arithmetic:

  atomic_{add,sub,inc,dec}()
  atomic_{add,sub,inc,dec}_return{,_relaxed,_acquire,_release}()
  atomic_fetch_{add,sub,inc,dec}{,_relaxed,_acquire,_release}()

Bitwise:

  atomic_{and,or,xor,andnot}()
  atomic_fetch_{and,or,xor,andnot}{,_relaxed,_acquire,_release}()

Swap:

  atomic_xchg{,_relaxed,_acquire,_release}()
  atomic_cmpxchg{,_relaxed,_acquire,_release}()
  atomic_try_cmpxchg{,_relaxed,_acquire,_release}()

Reference count (but please see refcount_t):

  atomic_add_unless(), atomic_inc_not_zero()
  atomic_sub_and_test(), atomic_dec_and_test()

Misc:

  atomic_inc_and_test(), atomic_add_negative()
  atomic_dec_unless_positive(), atomic_inc_unless_negative()

Barriers:

  smp_mb__{before,after}_atomic()
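
As a rough illustration of the reference-count helpers above (new code should
normally use refcount_t instead), a classic get/put pair might look like the
sketch below; struct obj and its refs member are made up for the example:

  static bool obj_get(struct obj *o)
  {
          /* Only take a reference if the count has not already hit zero. */
          return atomic_inc_not_zero(&o->refs);
  }

  static void obj_put(struct obj *o)
  {
          /* Free the object when the last reference is dropped. */
          if (atomic_dec_and_test(&o->refs))
                  kfree(o);
  }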

TYPES (signed vs unsigned)
-----

While atomic_t, atomic_long_t and atomic64_t use int, long and s64
respectively (for hysterical raisins), the kernel uses -fno-strict-overflow
(which implies -fwrapv) and defines signed overflow to behave like
2s-complement.

Therefore, an explicitly unsigned variant of the atomic ops is strictly
unnecessary and we can simply cast; there is no UB.

There was a bug in UBSAN prior to GCC-8 that would generate UB warnings for
signed types.

With this we also conform to the C/C++ _Atomic behaviour and things like
P1236R1.
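
For example, since overflow is well defined, code that wants modular
(wrap-around) arithmetic on an atomic_t can simply cast the value to
unsigned; a minimal sketch (ticks_since() is a made-up helper):

  /* Distance from a previous snapshot of a free-running counter; the
   * subtraction wraps modulo 2^32 instead of being undefined. */
  static unsigned int ticks_since(atomic_t *v, unsigned int old)
  {
          return (unsigned int)atomic_read(v) - old;
  }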

SEMANTICS
---------

Non-RMW ops:

The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
smp_store_release() respectively. Therefore, if you find yourself only using
the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
and are doing it wrong.
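
To make that concrete, here is a minimal sketch of a flag done both ways
(do_stuff() is a stand-in, and the two snippets are alternatives, not meant
to be combined):

  /* Only non-RMW ops are ever used, so atomic_t buys nothing here: */
  atomic_t ready = ATOMIC_INIT(0);

  atomic_set_release(&ready, 1);
  if (atomic_read_acquire(&ready))
          do_stuff();

  /* A plain int with the ordinary accessors is equivalent and preferred: */
  int ready = 0;

  smp_store_release(&ready, 1);
  if (smp_load_acquire(&ready))
          do_stuff();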

A note for the implementation of atomic_set{}() is that it must not break the
atomicity of the RMW ops. That is:

  C Atomic-RMW-ops-are-atomic-WRT-atomic_set

  {
    atomic_t v = ATOMIC_INIT(1);
  }

  P0(atomic_t *v)
  {
    (void)atomic_add_unless(v, 1, 0);
  }

  P1(atomic_t *v)
  {
    atomic_set(v, 0);
  }

  exists
  (v=2)

In this case we would expect the atomic_set() from CPU1 to either happen
before the atomic_add_unless(), in which case that latter one would no-op, or
_after_ in which case we'd overwrite its result. In no case is "2" a valid
outcome.

This is typically true on 'normal' platforms, where a regular competing STORE
will invalidate a LL/SC or fail a CMPXCHG.

The obvious case where this is not so is when we need to implement atomic ops
with a lock:

  CPU0                                          CPU1

  atomic_add_unless(v, 1, 0);
    lock();
    ret = READ_ONCE(v->counter); // == 1
                                                atomic_set(v, 0);
    if (ret != u)                                 WRITE_ONCE(v->counter, 0);
      WRITE_ONCE(v->counter, ret + 1);
    unlock();

the typical solution is to then implement atomic_set{}() with atomic_xchg().
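
A minimal sketch of that solution, assuming a hypothetical lock-based
implementation whose RMW ops all serialize on the lock()/unlock() from the
diagram above (the kernel's real spinlock-based fallback for 64-bit atomics
lives in lib/atomic64.c):

  /* Every RMW op runs under the lock... */
  static int lock_based_add_unless(atomic_t *v, int a, int u)
  {
          int ret;

          lock();
          ret = v->counter;
          if (ret != u)
                  v->counter = ret + a;
          unlock();

          return ret != u;
  }

  /* ...so atomic_set() must serialize against them too; reusing
   * atomic_xchg() takes the same lock and discards the old value. */
  static void lock_based_set(atomic_t *v, int i)
  {
          (void)atomic_xchg(v, i);
  }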

RMW ops:

These come in various forms:

 - plain operations without return value: atomic_{}()

 - operations which return the modified value: atomic_{}_return()

   these are limited to the arithmetic operations because those are
   reversible. Bitops are irreversible and therefore the modified value
   is of dubious utility.

 - operations which return the original value: atomic_fetch_{}()

 - swap operations: xchg(), cmpxchg() and try_cmpxchg()

 - misc; the special purpose operations that are commonly used and would,
   given the interface, normally be implemented using (try_)cmpxchg loops but
   are time critical and can, (typically) on LL/SC architectures, be more
   efficiently implemented.

All these operations are SMP atomic; that is, the operations (for a single
atomic variable) can be fully ordered and no intermediate state is lost or
visible.
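
As an example of the 'misc' class above: built only from the plainer
primitives, atomic_add_unless() would be a try_cmpxchg() loop roughly like
the sketch below (my_add_unless() is a made-up name; an LL/SC architecture
can usually fold the condition into the LL/SC loop itself and avoid the
retry dance):

  static inline bool my_add_unless(atomic_t *v, int a, int u)
  {
          int c = atomic_read(v);

          do {
                  /* Give up without writing if the exception value is hit. */
                  if (c == u)
                          return false;
                  /* On failure atomic_try_cmpxchg() updates 'c' to the
                   * current value and we go around again. */
          } while (!atomic_try_cmpxchg(v, &c, c + a));

          return true;
  }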

ORDERING (go read memory-barriers.txt first)
--------

The rule of thumb:

 - non-RMW operations are unordered;

 - RMW operations that have no return value are unordered;

 - RMW operations that have a return value are fully ordered;

 - RMW operations that are conditional are unordered on FAILURE,
   otherwise the above rules apply.

Except of course when an operation has an explicit ordering like:

 {}_relaxed: unordered
 {}_acquire: the R of the RMW (or atomic_read) is an ACQUIRE
 {}_release: the W of the RMW (or atomic_set) is a RELEASE

Where 'unordered' is against other memory locations. Address dependencies are
not defeated.
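
Putting those rules together, a few annotated calls (v, old, new and ok are
just locals for the example):

  atomic_set(v, 1);                       /* non-RMW: unordered */
  atomic_inc(v);                          /* RMW, no return value: unordered */
  new = atomic_inc_return(v);             /* RMW with return value: fully ordered */
  old = atomic_fetch_add_relaxed(1, v);   /* explicitly relaxed: unordered */
  new = atomic_dec_return_release(v);     /* the W of the RMW is a RELEASE */
  ok  = atomic_try_cmpxchg(v, &old, new); /* fully ordered on success,
                                             unordered on FAILURE */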

Fully ordered primitives are ordered against everything prior and everything
subsequent. Therefore a fully ordered primitive is like having an smp_mb()
before and an smp_mb() after the primitive.


The barriers:

  smp_mb__{before,after}_atomic()

only apply to the RMW atomic ops and can be used to augment/upgrade the
ordering inherent to the op. These barriers act almost like a full smp_mb():
smp_mb__before_atomic() orders all earlier accesses against the RMW op
itself and all accesses following it, and smp_mb__after_atomic() orders all
later accesses against the RMW op and all accesses preceding it. However,
accesses between the smp_mb__{before,after}_atomic() and the RMW op are not
ordered, so it is advisable to place the barrier right next to the RMW atomic
op whenever possible.
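
For instance, in the first placement below the store to *p sits between the
barrier and the RMW op and is ordered against neither; keeping the barrier
adjacent to atomic_inc() avoids the surprise:

  /* Questionable placement: */
  smp_mb__before_atomic();
  WRITE_ONCE(*p, 1);          /* NOT ordered by the barrier or the RMW op */
  atomic_inc(v);

  /* Preferred placement: */
  WRITE_ONCE(*p, 1);
  smp_mb__before_atomic();    /* orders the store (and everything earlier)
                                 against atomic_inc() and all later accesses */
  atomic_inc(v);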

These helper barriers exist because architectures have varying implicit
ordering on their SMP atomic primitives. For example our TSO architectures
provide fully ordered atomics and these barriers are no-ops.

NOTE: when the atomic RmW ops are fully ordered, they should also imply a
compiler barrier.

Thus:

  atomic_fetch_add();

is equivalent to:

  smp_mb__before_atomic();
  atomic_fetch_add_relaxed();
  smp_mb__after_atomic();

However the atomic_fetch_add() might be implemented more efficiently.

Further, while something like:

  smp_mb__before_atomic();
  atomic_dec(&X);

is a 'typical' RELEASE pattern, the barrier is strictly stronger than
a RELEASE because it orders preceding instructions against both the read
and write parts of the atomic_dec(), and against all following instructions
as well. Similarly, something like:

  atomic_inc(&X);
  smp_mb__after_atomic();

is an ACQUIRE pattern (though very much not typical), but again the barrier is
strictly stronger than ACQUIRE. As illustrated:

  C Atomic-RMW+mb__after_atomic-is-stronger-than-acquire

  {
  }

  P0(int *x, atomic_t *y)
  {
    r0 = READ_ONCE(*x);
    smp_rmb();
    r1 = atomic_read(y);
  }

  P1(int *x, atomic_t *y)
  {
    atomic_inc(y);
    smp_mb__after_atomic();
    WRITE_ONCE(*x, 1);
  }

  exists
  (0:r0=1 /\ 0:r1=0)

This should not happen; but a hypothetical atomic_inc_acquire() --
(void)atomic_fetch_inc_acquire() for instance -- would allow the outcome,
because it would not order the W part of the RMW against the following
WRITE_ONCE. Thus:

  P0                    P1

                        t = LL.acq *y (0)
                        t++;
                        *x = 1;
  r0 = *x (1)
  RMB
  r1 = *y (0)
                        SC *y, t;

is allowed.