
Microarchitectural Data Sampling (MDS) mitigation
=================================================

.. _mds:

Overview
--------

Microarchitectural Data Sampling (MDS) is a family of side channel attacks
on internal buffers in Intel CPUs. The variants are:

 - Microarchitectural Store Buffer Data Sampling (MSBDS) (CVE-2018-12126)
 - Microarchitectural Fill Buffer Data Sampling (MFBDS) (CVE-2018-12130)
 - Microarchitectural Load Port Data Sampling (MLPDS) (CVE-2018-12127)
 - Microarchitectural Data Sampling Uncacheable Memory (MDSUM) (CVE-2019-11091)

MSBDS leaks Store Buffer Entries which can be speculatively forwarded to a
dependent load (store-to-load forwarding) as an optimization. The forward
can also happen to a faulting or assisting load operation for a different
memory address, which can be exploited under certain conditions. Store
buffers are partitioned between Hyper-Threads so cross thread forwarding is
not possible. But if a thread enters or exits a sleep state the store
buffer is repartitioned which can expose data from one thread to the other.

MFBDS leaks Fill Buffer Entries. Fill buffers are used internally to manage
L1 miss situations and to hold data which is returned or sent in response
to a memory or I/O operation. Fill buffers can forward data to a load
operation and also write data to the cache. When the fill buffer is
deallocated it can retain the stale data of the preceding operations which
can then be forwarded to a faulting or assisting load operation, which can
be exploited under certain conditions. Fill buffers are shared between
Hyper-Threads so cross thread leakage is possible.

MLPDS leaks Load Port Data. Load ports are used to perform load operations
from memory or I/O. The received data is then forwarded to the register
file or a subsequent operation. In some implementations the Load Port can
contain stale data from a previous operation which can be forwarded to
faulting or assisting loads under certain conditions, which again can be
exploited eventually. Load ports are shared between Hyper-Threads so cross
thread leakage is possible.

MDSUM is a special case of MSBDS, MFBDS and MLPDS. An uncacheable load from
memory that takes a fault or assist can leave data in a microarchitectural
structure that may later be observed using one of the same methods used by
MSBDS, MFBDS or MLPDS.

Exposure assumptions
--------------------

It is assumed that attack code resides in user space or in a guest with one
exception. The rationale behind this assumption is that the code construct
needed for exploiting MDS requires:

 - to control the load to trigger a fault or assist

 - to have a disclosure gadget which exposes the speculatively accessed
   data for consumption through a side channel

 - to control the pointer through which the disclosure gadget exposes the
   data

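For illustration only, the shape of such a construct can be sketched as
follows. All names (leak_source, probe_array, disclosure_gadget) are made
up for this sketch; a real attack would additionally need fault/assist
handling, cache timing measurement of probe_array and many iterations::

    /* Hypothetical sketch of the construct described above. */
    extern volatile unsigned char *leak_source;   /* load which faults or assists */
    extern unsigned char probe_array[256 * 4096]; /* side channel: one page/value */

    static void disclosure_gadget(void)
    {
            /*
             * Attacker controlled load which triggers a fault or assist.
             * Architecturally the result is discarded, but stale buffer
             * data may be forwarded to it speculatively.
             */
            unsigned char value = *leak_source;

            /*
             * Disclosure gadget: a dependent access through an attacker
             * controlled pointer encodes the speculative value in the
             * cache state of probe_array, where it can later be recovered
             * by timing accesses.
             */
            (void)probe_array[value * 4096];
    }
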
The existence of such a construct in the kernel cannot be excluded with
100% certainty, but the complexity involved makes it extremely unlikely.

There is one exception, which is untrusted BPF. The functionality of
untrusted BPF is limited, but it needs to be thoroughly investigated
whether it can be used to create such a construct.

Mitigation strategy
-------------------

All variants have the same mitigation strategy at least for the single CPU
thread case (SMT off): Force the CPU to clear the affected buffers.

This is achieved by using the otherwise unused and obsolete VERW
instruction in combination with a microcode update. The microcode clears
the affected CPU buffers when the VERW instruction is executed.

For virtualization there are two ways to achieve CPU buffer clearing:
either via the modified VERW instruction or via the L1D Flush command.
The latter is issued when L1TF mitigation is enabled so the extra VERW
can be avoided. If the CPU is not affected by L1TF then VERW needs to
be issued.

If the VERW instruction with the supplied segment selector argument is
executed on a CPU without the microcode update there is no side effect
other than a small number of pointlessly wasted CPU cycles.

This does not protect against cross Hyper-Thread attacks except for MSBDS
which is only exploitable cross Hyper-Thread when one of the Hyper-Threads
enters a C-state.

The kernel provides a function to invoke the buffer clearing:

    mds_clear_cpu_buffers()

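A minimal sketch of such a helper is shown below. It assumes a valid
writable data segment selector (__KERNEL_DS) and uses the memory operand
form of VERW, which is the form documented to trigger the buffer clearing;
includes are omitted and the in-tree implementation may differ in detail::

    static inline void mds_clear_cpu_buffers(void)
    {
            static const u16 ds = __KERNEL_DS;

            /*
             * With the updated microcode the memory operand variant of
             * VERW clears the affected CPU buffers as a side effect.
             * VERW modifies ZF, hence the "cc" clobber.
             */
            asm volatile("verw %[ds]" : : [ds] "m" (ds) : "cc");
    }
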
The mitigation is invoked on kernel/userspace, hypervisor/guest and C-state
(idle) transitions.

As a special quirk to address virtualization scenarios where the host has
the microcode updated, but the hypervisor does not (yet) expose the
MD_CLEAR CPUID bit to guests, the kernel issues the VERW instruction in the
hope that it might actually clear the buffers. The state is reflected
accordingly.

According to current knowledge additional mitigations inside the kernel
itself are not required because the necessary gadgets to expose the leaked
data cannot be controlled in a way which allows exploitation from malicious
user space or VM guests.

Kernel internal mitigation modes
--------------------------------

======= ============================================================
off     Mitigation is disabled. Either the CPU is not affected or
        mds=off is supplied on the kernel command line

full    Mitigation is enabled. CPU is affected and MD_CLEAR is
        advertised in CPUID.

vmwerv  Mitigation is enabled. CPU is affected and MD_CLEAR is not
        advertised in CPUID. That is mainly for virtualization
        scenarios where the host has the updated microcode but the
        hypervisor does not expose MD_CLEAR in CPUID. It's a best
        effort approach without guarantee.
======= ============================================================

If the CPU is affected and mds=off is not supplied on the kernel command
line then the kernel selects the appropriate mitigation mode depending on
the availability of the MD_CLEAR CPUID bit.

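A simplified sketch of that selection logic, omitting command line parsing
and SMT related handling, might look like this; it is a sketch, not a
verbatim copy of the in-tree code::

    static void __init mds_select_mitigation(void)
    {
            if (!boot_cpu_has_bug(X86_BUG_MDS) || cpu_mitigations_off()) {
                    mds_mitigation = MDS_MITIGATION_OFF;
                    return;
            }

            if (mds_mitigation == MDS_MITIGATION_FULL) {
                    /* Fall back to best effort mode without MD_CLEAR. */
                    if (!boot_cpu_has(X86_FEATURE_MD_CLEAR))
                            mds_mitigation = MDS_MITIGATION_VMWERV;

                    static_branch_enable(&mds_user_clear);
            }
    }
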
Mitigation points
-----------------

1. Return to user space
^^^^^^^^^^^^^^^^^^^^^^^

When transitioning from kernel to user space the CPU buffers are flushed
on affected CPUs when the mitigation is not disabled on the kernel
command line. The mitigation is enabled through the static key
mds_user_clear.

The mitigation is invoked in prepare_exit_to_usermode() which covers
all but one of the kernel to user space transitions. The exception
is when we return from a Non Maskable Interrupt (NMI), which is
handled directly in do_nmi().

(The reason that NMI is special is that prepare_exit_to_usermode() can
enable IRQs. In NMI context, NMIs are blocked, and we don't want to
enable IRQs with NMIs blocked.)

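A sketch of how the static key gates the clearing on that path, reusing the
mds_clear_cpu_buffers() helper from above (simplified)::

    DECLARE_STATIC_KEY_FALSE(mds_user_clear);

    /* Invoked from prepare_exit_to_usermode() and from the NMI return path. */
    static inline void mds_user_clear_cpu_buffers(void)
    {
            if (static_branch_likely(&mds_user_clear))
                    mds_clear_cpu_buffers();
    }
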
2. C-State transition
^^^^^^^^^^^^^^^^^^^^^

When a CPU goes idle and enters a C-State the CPU buffers need to be
cleared on affected CPUs when SMT is active. This addresses the
repartitioning of the store buffer when one of the Hyper-Threads enters
a C-State.

When SMT is inactive, i.e. either the CPU does not support it or all
sibling threads are offline, CPU buffer clearing is not required.

The idle clearing is enabled on CPUs which are only affected by MSBDS
and not by any other MDS variant. The other MDS variants cannot be
protected against cross Hyper-Thread attacks because the Fill Buffer and
the Load Ports are shared. So on CPUs affected by other variants, the
idle clearing would be a window dressing exercise and is therefore not
activated.

The invocation is controlled by the static key mds_idle_clear which is
switched depending on the chosen mitigation mode and the SMT state of
the system.

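A sketch of how that switching can look when the SMT state changes,
restricted to the MSBDS-only case described above (simplified)::

    static void update_mds_branch_idle(void)
    {
            /*
             * Idle clearing is only useful on CPUs which are affected by
             * MSBDS and not by any other MDS variant.
             */
            if (!boot_cpu_has_bug(X86_BUG_MSBDS_ONLY))
                    return;

            if (sched_smt_active())
                    static_branch_enable(&mds_idle_clear);
            else
                    static_branch_disable(&mds_idle_clear);
    }
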
The buffer clear is only invoked before entering the C-State to prevent
stale data from the idling CPU from spilling to the Hyper-Thread
sibling after the store buffer got repartitioned and all entries are
available to the non-idle sibling.

When coming out of idle the store buffer is partitioned again so each
sibling has half of it available. The CPU returning from idle could then
be speculatively exposed to contents of the sibling. The buffers are
flushed either on exit to user space or on VMENTER so malicious code
in user space or the guest cannot speculatively access them.

The mitigation is hooked into all variants of halt()/mwait(), but does
not cover the legacy ACPI IO-Port mechanism because the ACPI idle driver
has been superseded by the intel_idle driver around 2010. intel_idle is
preferred on all affected CPUs, which are expected to gain the MD_CLEAR
functionality in microcode. Aside from that, the IO-Port mechanism is a
legacy interface which is only used on older systems which are either
not affected or do not receive microcode updates anymore.

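A heavily simplified sketch of the idle side hook, showing only the buffer
clear guarded by mds_idle_clear in front of an MWAIT based idle entry; the
real idle helpers contain additional quirk handling not shown here::

    static inline void mds_idle_clear_cpu_buffers(void)
    {
            if (static_branch_likely(&mds_idle_clear))
                    mds_clear_cpu_buffers();
    }

    static inline void mwait_idle_with_hints(unsigned long eax, unsigned long ecx)
    {
            /*
             * Clear the affected buffers before the store buffer gets
             * repartitioned and handed to the sibling via the C-State
             * transition.
             */
            mds_idle_clear_cpu_buffers();

            __monitor((void *)&current_thread_info()->flags, 0, 0);
            if (!need_resched())
                    __mwait(eax, ecx);
    }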