===========
Static Keys
===========

.. warning::
   DEPRECATED API:

   The use of 'struct static_key' directly is now DEPRECATED. In addition,
   static_key_{true,false}() is also DEPRECATED. I.e., DO NOT use the following::

      struct static_key false = STATIC_KEY_INIT_FALSE;
      struct static_key true = STATIC_KEY_INIT_TRUE;
      static_key_true()
      static_key_false()

   The updated API replacements are::

      DEFINE_STATIC_KEY_TRUE(key);
      DEFINE_STATIC_KEY_FALSE(key);
      DEFINE_STATIC_KEY_ARRAY_TRUE(keys, count);
      DEFINE_STATIC_KEY_ARRAY_FALSE(keys, count);
      static_branch_likely()
      static_branch_unlikely()

Abstract
========

Static keys allow the inclusion of seldom used features in
performance-sensitive fast-path kernel code, via a GCC feature and a code
patching technique. A quick example::

    DEFINE_STATIC_KEY_FALSE(key);

    ...

    if (static_branch_unlikely(&key))
            do unlikely code
    else
            do likely code

    ...
    static_branch_enable(&key);
    ...
    static_branch_disable(&key);
    ...

The static_branch_unlikely() branch will be generated into the code with as
little impact on the likely code path as possible.

Motivation
==========

Currently, tracepoints are implemented using a conditional branch. The
conditional check requires checking a global variable for each tracepoint.
Although the overhead of this check is small, it increases when the memory
cache comes under pressure (memory cache lines for these global variables may
be shared with other memory accesses). As we increase the number of tracepoints
in the kernel this overhead may become more of an issue. In addition,
tracepoints are often dormant (disabled) and provide no direct kernel
functionality. Thus, it is highly desirable to reduce their impact as much as
possible. Although tracepoints are the original motivation for this work, other
kernel code paths should be able to make use of the static keys facility.

Solution
========

gcc (v4.5) adds a new 'asm goto' statement that allows branching to a label:

  https://gcc.gnu.org/ml/gcc-patches/2009-07/msg01556.html

Using the 'asm goto', we can create branches that are either taken or not taken
by default, without the need to check memory. Then, at run-time, we can patch
the branch site to change the branch direction.

For example, if we have a simple branch that is disabled by default::

    if (static_branch_unlikely(&key))
            printk("I am the true branch\n");

Thus, by default the 'printk' will not be emitted. And the code generated will
consist of a single atomic 'no-op' instruction (5 bytes on x86), in the
straight-line code path. When the branch is 'flipped', we will patch the
'no-op' in the straight-line code path with a 'jump' instruction to the
out-of-line true branch. Thus, changing branch direction is expensive but
branch selection is basically 'free'. That is the basic tradeoff of this
optimization.

This low-level patching mechanism is called 'jump label patching', and it gives
the basis for the static keys facility.

Static key label API, usage and examples
========================================

In order to make use of this optimization you must first define a key::

    DEFINE_STATIC_KEY_TRUE(key);

or::

    DEFINE_STATIC_KEY_FALSE(key);

The key must be global; that is, it can't be allocated on the stack or
dynamically allocated at run-time.

The key is then used in code as::

    if (static_branch_unlikely(&key))
            do unlikely code
    else
            do likely code

or::

    if (static_branch_likely(&key))
            do likely code
    else
            do unlikely code

Keys defined via DEFINE_STATIC_KEY_TRUE() or DEFINE_STATIC_KEY_FALSE() may
be used in either static_branch_likely() or static_branch_unlikely()
statements.
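
As a minimal, self-contained sketch of how this fits together (the key name,
the functions and the debug-statistics feature below are purely illustrative,
not an existing kernel interface), a disabled-by-default feature could be
wired up like this::

    #include <linux/jump_label.h>
    #include <linux/printk.h>

    /* Hypothetical feature, off by default. */
    static DEFINE_STATIC_KEY_FALSE(my_debug_stats_key);

    void my_hot_path(void)
    {
            /* Compiles to a single no-op in the fast path until the key
             * is enabled, then to a jump to the pr_info() call. */
            if (static_branch_unlikely(&my_debug_stats_key))
                    pr_info("debug stats enabled\n");

            /* ... the likely, performance-critical work ... */
    }

    /* Called from a slow path, e.g. a debugfs write handler. */
    void my_debug_stats_set(bool on)
    {
            if (on)
                    static_branch_enable(&my_debug_stats_key);
            else
                    static_branch_disable(&my_debug_stats_key);
    }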

Branch(es) can be set true via::

    static_branch_enable(&key);

or false via::

    static_branch_disable(&key);

The branch(es) can then be switched via reference counts::

    static_branch_inc(&key);
    ...
    static_branch_dec(&key);

Thus, 'static_branch_inc()' means 'make the branch true', and
'static_branch_dec()' means 'make the branch false', with appropriate
reference counting. For example, if the key is initialized true, a
static_branch_dec() will switch the branch to false. And a subsequent
static_branch_inc() will change the branch back to true. Likewise, if the
key is initialized false, a 'static_branch_inc()' will change the branch to
true. And then a 'static_branch_dec()' will again make the branch false.
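
As a rough illustration (the key and the register/unregister functions below
are hypothetical), several users can share one key, and the branch only flips
when the reference count moves between zero and non-zero::

    #include <linux/jump_label.h>

    /* Hypothetical key, initialized false: the branch starts out disabled. */
    static DEFINE_STATIC_KEY_FALSE(my_feature_key);

    void my_user_register(void)
    {
            /* 0 -> 1: the first user flips the branch to true. */
            static_branch_inc(&my_feature_key);
    }

    void my_user_unregister(void)
    {
            /* 1 -> 0: the last user flips the branch back to false. */
            static_branch_dec(&my_feature_key);
    }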

The state and the reference count can be retrieved with 'static_key_enabled()'
and 'static_key_count()'. In general, if you use these functions, they
should be protected with the same mutex used around the enable/disable
or increment/decrement function.
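
For example, a sketch of serializing updates and state queries under one mutex
(the mutex, key and function names are illustrative only)::

    #include <linux/jump_label.h>
    #include <linux/mutex.h>
    #include <linux/printk.h>

    static DEFINE_MUTEX(my_key_mutex);     /* guards all changes to my_key */
    static DEFINE_STATIC_KEY_FALSE(my_key);

    void my_key_add_user(void)
    {
            mutex_lock(&my_key_mutex);
            static_branch_inc(&my_key);
            /* Safe to inspect the state here: all updates hold the mutex. */
            pr_debug("my_key enabled: %d\n", static_key_enabled(&my_key));
            mutex_unlock(&my_key_mutex);
    }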

Note that switching branches results in some locks being taken,
particularly the CPU hotplug lock (in order to avoid races against
CPUs being brought online while the kernel is being patched). Calling
the static key API from within a hotplug notifier is thus a sure
deadlock recipe. In order to still allow use of the functionality,
the following functions are provided::

    static_key_enable_cpuslocked()
    static_key_disable_cpuslocked()
    static_branch_enable_cpuslocked()
    static_branch_disable_cpuslocked()

These functions are *not* general purpose, and must only be used when
you really know that you're in the above context, and no other.
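
As an illustration of the kind of context these variants are meant for (the
key, callbacks and hotplug registration below are made up), a CPU hotplug
callback runs with the hotplug lock already held, so it must use the
_cpuslocked variants::

    #include <linux/cpuhotplug.h>
    #include <linux/jump_label.h>

    /* Hypothetical key toggled as CPUs come and go. */
    static DEFINE_STATIC_KEY_FALSE(my_cpu_feature_key);

    /* Invoked by the CPU hotplug machinery with the hotplug lock held;
     * calling plain static_branch_enable() here would deadlock. */
    static int my_cpu_online(unsigned int cpu)
    {
            static_branch_enable_cpuslocked(&my_cpu_feature_key);
            return 0;
    }

    static int my_cpu_offline(unsigned int cpu)
    {
            static_branch_disable_cpuslocked(&my_cpu_feature_key);
            return 0;
    }

    /* Registered e.g. from module init code:
     *   cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "my/feature:online",
     *                     my_cpu_online, my_cpu_offline);
     */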

Where an array of keys is required, it can be defined as::

    DEFINE_STATIC_KEY_ARRAY_TRUE(keys, count);

or::

    DEFINE_STATIC_KEY_ARRAY_FALSE(keys, count);
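
A short sketch of using such an array (the event-class scheme and names are
invented for illustration); each element behaves like an ordinary key::

    #include <linux/jump_label.h>
    #include <linux/printk.h>

    #define MY_NR_EVENTS    4       /* hypothetical number of event classes */

    /* Four independent keys, all initialized false. */
    static DEFINE_STATIC_KEY_ARRAY_FALSE(my_event_keys, MY_NR_EVENTS);

    void my_handle_event(unsigned int event)
    {
            if (static_branch_unlikely(&my_event_keys[event]))
                    pr_info("event %u tracing enabled\n", event);
    }

    void my_enable_event(unsigned int event)
    {
            static_branch_enable(&my_event_keys[event]);
    }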

Architecture level code patching interface, 'jump labels'
==========================================================

There are a few functions and macros that architectures must implement in order
to take advantage of this optimization. If there is no architecture support, we
simply fall back to a traditional load, test, and jump sequence (a sketch of
that fallback follows the list below). Also, the struct jump_entry table must
be at least 4-byte aligned because the static_key->entry field makes use of the
two least significant bits.

* ``select HAVE_ARCH_JUMP_LABEL``,
    see: arch/x86/Kconfig

* ``#define JUMP_LABEL_NOP_SIZE``,
    see: arch/x86/include/asm/jump_label.h

* ``__always_inline bool arch_static_branch(struct static_key *key, bool branch)``,
    see: arch/x86/include/asm/jump_label.h

* ``__always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)``,
    see: arch/x86/include/asm/jump_label.h

* ``void arch_jump_label_transform(struct jump_entry *entry, enum jump_label_type type)``,
    see: arch/x86/kernel/jump_label.c

* ``__init_or_module void arch_jump_label_transform_static(struct jump_entry *entry, enum jump_label_type type)``,
    see: arch/x86/kernel/jump_label.c

* ``struct jump_entry``,
    see: arch/x86/include/asm/jump_label.h
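
The fallback mentioned above amounts to an ordinary memory test; roughly (this
is an illustrative sketch, not the exact generic implementation)::

    #include <linux/compiler.h>
    #include <linux/jump_label.h>

    /* Roughly what a static branch reduces to when HAVE_ARCH_JUMP_LABEL is
     * not selected: load the key's enable count, test it, and branch. */
    static __always_inline bool my_fallback_branch(struct static_key *key)
    {
            return unlikely(static_key_count(key) > 0);
    }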

Static keys / jump label analysis, results (x86_64)
====================================================

As an example, let's add the following branch to 'getppid()', such that the
system call now looks like::

    SYSCALL_DEFINE0(getppid)
    {
            int pid;

    +       if (static_branch_unlikely(&key))
    +               printk("I am the true branch\n");

            rcu_read_lock();
            pid = task_tgid_vnr(rcu_dereference(current->real_parent));
            rcu_read_unlock();

            return pid;
    }

The resulting instructions with jump labels generated by GCC are::

    ffffffff81044290 <sys_getppid>:
    ffffffff81044290:  55                      push   %rbp
    ffffffff81044291:  48 89 e5                mov    %rsp,%rbp
    ffffffff81044294:  e9 00 00 00 00          jmpq   ffffffff81044299 <sys_getppid+0x9>
    ffffffff81044299:  65 48 8b 04 25 c0 b6    mov    %gs:0xb6c0,%rax
    ffffffff810442a0:  00 00
    ffffffff810442a2:  48 8b 80 80 02 00 00    mov    0x280(%rax),%rax
    ffffffff810442a9:  48 8b 80 b0 02 00 00    mov    0x2b0(%rax),%rax
    ffffffff810442b0:  48 8b b8 e8 02 00 00    mov    0x2e8(%rax),%rdi
    ffffffff810442b7:  e8 f4 d9 00 00          callq  ffffffff81051cb0 <pid_vnr>
    ffffffff810442bc:  5d                      pop    %rbp
    ffffffff810442bd:  48 98                   cltq
    ffffffff810442bf:  c3                      retq
    ffffffff810442c0:  48 c7 c7 e3 54 98 81    mov    $0xffffffff819854e3,%rdi
    ffffffff810442c7:  31 c0                   xor    %eax,%eax
    ffffffff810442c9:  e8 71 13 6d 00          callq  ffffffff8171563f <printk>
    ffffffff810442ce:  eb c9                   jmp    ffffffff81044299 <sys_getppid+0x9>

Without the jump label optimization it looks like::

    ffffffff810441f0 <sys_getppid>:
    ffffffff810441f0:  8b 05 8a 52 d8 00       mov    0xd8528a(%rip),%eax        # ffffffff81dc9480 <key>
    ffffffff810441f6:  55                      push   %rbp
    ffffffff810441f7:  48 89 e5                mov    %rsp,%rbp
    ffffffff810441fa:  85 c0                   test   %eax,%eax
    ffffffff810441fc:  75 27                   jne    ffffffff81044225 <sys_getppid+0x35>
    ffffffff810441fe:  65 48 8b 04 25 c0 b6    mov    %gs:0xb6c0,%rax
    ffffffff81044205:  00 00
    ffffffff81044207:  48 8b 80 80 02 00 00    mov    0x280(%rax),%rax
    ffffffff8104420e:  48 8b 80 b0 02 00 00    mov    0x2b0(%rax),%rax
    ffffffff81044215:  48 8b b8 e8 02 00 00    mov    0x2e8(%rax),%rdi
    ffffffff8104421c:  e8 2f da 00 00          callq  ffffffff81051c50 <pid_vnr>
    ffffffff81044221:  5d                      pop    %rbp
    ffffffff81044222:  48 98                   cltq
    ffffffff81044224:  c3                      retq
    ffffffff81044225:  48 c7 c7 13 53 98 81    mov    $0xffffffff81985313,%rdi
    ffffffff8104422c:  31 c0                   xor    %eax,%eax
    ffffffff8104422e:  e8 60 0f 6d 00          callq  ffffffff81715193 <printk>
    ffffffff81044233:  eb c9                   jmp    ffffffff810441fe <sys_getppid+0xe>
    ffffffff81044235:  66 66 2e 0f 1f 84 00    data32 nopw %cs:0x0(%rax,%rax,1)
    ffffffff8104423c:  00 00 00 00

Thus, the disabled jump label case adds a 'mov', 'test' and 'jne' instruction,
whereas the jump label case just has a 'no-op' or 'jmp 0'. (The 'jmp 0' is
patched to a 5-byte atomic no-op instruction at boot time.) Thus, the disabled
jump label case adds::

    6 (mov) + 2 (test) + 2 (jne) = 10 bytes - 5 (5-byte jmp 0) = 5 additional bytes.

If we then include the padding bytes, the jump label code saves 16 total bytes
of instruction memory for this small function. In this case the non-jump-label
function is 80 bytes long. Thus, we have saved 20% of the instruction
footprint. We could in fact improve this even further, since the 5-byte no-op
really could be a 2-byte no-op, as we can reach the branch with a 2-byte jmp.
However, we have not yet implemented optimal no-op sizes (they are currently
hard-coded).

Since there are a number of static key API uses in the scheduler paths,
'pipe-test' (also known as 'perf bench sched pipe') can be used to show the
performance improvement. Testing done on 3.3.0-rc2:

jump label disabled::

     Performance counter stats for 'bash -c /tmp/pipe-test' (50 runs):

            855.700314 task-clock                #    0.534 CPUs utilized            ( +-  0.11% )
               200,003 context-switches          #    0.234 M/sec                    ( +-  0.00% )
                     0 CPU-migrations            #    0.000 M/sec                    ( +- 39.58% )
                   487 page-faults               #    0.001 M/sec                    ( +-  0.02% )
         1,474,374,262 cycles                    #    1.723 GHz                      ( +-  0.17% )
       <not supported> stalled-cycles-frontend
       <not supported> stalled-cycles-backend
         1,178,049,567 instructions              #    0.80  insns per cycle          ( +-  0.06% )
           208,368,926 branches                  #  243.507 M/sec                    ( +-  0.06% )
             5,569,188 branch-misses             #    2.67% of all branches          ( +-  0.54% )

           1.601607384 seconds time elapsed                                          ( +-  0.07% )

jump label enabled::

     Performance counter stats for 'bash -c /tmp/pipe-test' (50 runs):

            841.043185 task-clock                #    0.533 CPUs utilized            ( +-  0.12% )
               200,004 context-switches          #    0.238 M/sec                    ( +-  0.00% )
                     0 CPU-migrations            #    0.000 M/sec                    ( +- 40.87% )
                   487 page-faults               #    0.001 M/sec                    ( +-  0.05% )
         1,432,559,428 cycles                    #    1.703 GHz                      ( +-  0.18% )
       <not supported> stalled-cycles-frontend
       <not supported> stalled-cycles-backend
         1,175,363,994 instructions              #    0.82  insns per cycle          ( +-  0.04% )
           206,859,359 branches                  #  245.956 M/sec                    ( +-  0.04% )
             4,884,119 branch-misses             #    2.36% of all branches          ( +-  0.85% )

           1.579384366 seconds time elapsed

The percentage of saved branches is .7%, and we've saved 12% on
'branch-misses'. This is where we would expect to get the most savings, since
this optimization is about reducing the number of branches. In addition, we've
saved .2% on instructions, 2.8% on cycles, and 1.4% on elapsed time.