L1TF - L1 Terminal Fault
========================

L1 Terminal Fault is a hardware vulnerability which allows unprivileged
speculative access to data which is available in the Level 1 Data Cache
when the page table entry controlling the virtual address, which is used
for the access, has the Present bit cleared or other reserved bits set.
Affected processors
-------------------

This vulnerability affects a wide range of Intel processors. The
vulnerability is not present on:

- Processors from AMD, Centaur and other non-Intel vendors

- Older processor models, where the CPU family is < 6

- A range of Intel ATOM processors (Cedarview, Cloverview, Lincroft,
  Penwell, Pineview, Silvermont, Airmont, Merrifield)

- The Intel XEON PHI family

- Intel processors which have the ARCH_CAP_RDCL_NO bit set in the
  IA32_ARCH_CAPABILITIES MSR. If the bit is set the CPU is not affected
  by the Meltdown vulnerability either. These CPUs should become
  available by end of 2018.

Whether a processor is affected or not can be read out from the L1TF
vulnerability file in sysfs. See :ref:`l1tf_sys_info`.
Related CVEs
------------

The following CVE entries are related to the L1TF vulnerability:

=============  =================  ==============================
CVE-2018-3615  L1 Terminal Fault  SGX related aspects
CVE-2018-3620  L1 Terminal Fault  OS, SMM related aspects
CVE-2018-3646  L1 Terminal Fault  Virtualization related aspects
=============  =================  ==============================
Problem
-------

If an instruction accesses a virtual address for which the relevant page
table entry (PTE) has the Present bit cleared or other reserved bits set,
then speculative execution ignores the invalid PTE and loads the referenced
data if it is present in the Level 1 Data Cache, as if the page referenced
by the address bits in the PTE was still present and accessible.

While this is a purely speculative mechanism and the instruction will raise
a page fault when it is eventually retired, the mere act of loading the
data and making it available to other speculative instructions opens up the
opportunity for side channel attacks by unprivileged malicious code,
similar to the Meltdown attack.

While Meltdown breaks the user space to kernel space protection, L1TF
allows attacking any physical memory address in the system and the attack
works across all protection domains. It allows an attack of SGX and also
works from inside virtual machines because the speculation bypasses the
extended page table (EPT) protection mechanism.
Attack scenarios
----------------

1. Malicious user space
^^^^^^^^^^^^^^^^^^^^^^^

Operating systems store arbitrary information in the address bits of a
PTE which is marked non-present. This allows a malicious user space
application to attack the physical memory to which these PTEs resolve.
In some cases user space can maliciously influence the information
encoded in the address bits of the PTE, thus making attacks more
deterministic and more practical.

The Linux kernel contains a mitigation for this attack vector, PTE
inversion, which is permanently enabled and has no performance
impact. The kernel ensures that the address bits of PTEs, which are not
marked present, never point to cacheable physical memory space.

A system with an up-to-date kernel is protected against attacks from
malicious user space applications.
2. Malicious guest in a virtual machine
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The fact that L1TF breaks all domain protections allows malicious guest
OSes, which can control the PTEs directly, and malicious guest user
space applications, which run on an unprotected guest kernel lacking the
PTE inversion mitigation for L1TF, to attack physical host memory.

A special aspect of L1TF in the context of virtualization is symmetric
multi threading (SMT). The Intel implementation of SMT is called
HyperThreading. The fact that Hyperthreads on the affected processors
share the L1 Data Cache (L1D) is important for this. As the flaw allows
only to attack data which is present in L1D, a malicious guest running
on one Hyperthread can attack the data which is brought into the L1D by
the context which runs on the sibling Hyperthread of the same physical
core. This context can be the host OS, host user space or a different
guest.

If the processor does not support Extended Page Tables, the attack is
only possible when the hypervisor does not sanitize the content of the
effective (shadow) page tables.

While solutions exist to mitigate these attack vectors fully, these
mitigations are not enabled by default in the Linux kernel because they
can affect performance significantly. The kernel provides several
mechanisms which can be utilized to address the problem depending on the
deployment scenario. The mitigations, their protection scope and impact
are described in the next sections.

The default mitigations and the rationale for choosing them are explained
at the end of this document. See :ref:`default_mitigations`.
.. _l1tf_sys_info:

L1TF system information
-----------------------

The Linux kernel provides a sysfs interface to enumerate the current L1TF
status of the system: whether the system is vulnerable, and which
mitigations are active. The relevant sysfs file is:

/sys/devices/system/cpu/vulnerabilities/l1tf

The possible values in this file are:

===========================  ===============================
'Not affected'               The processor is not vulnerable
'Mitigation: PTE Inversion'  The host protection is active
===========================  ===============================

If KVM/VMX is enabled and the processor is vulnerable then the following
information is appended to the 'Mitigation: PTE Inversion' part:

- SMT status:

  =====================  ================
  'VMX: SMT vulnerable'  SMT is enabled
  'VMX: SMT disabled'    SMT is disabled
  =====================  ================

- L1D Flush mode:

  ================================  ====================================
  'L1D vulnerable'                  L1D flushing is disabled
  'L1D conditional cache flushes'   L1D flush is conditionally enabled
  'L1D cache flushes'               L1D flush is unconditionally enabled
  ================================  ====================================

The resulting grade of protection is discussed in the following sections.
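For example, the status can be read with a small shell snippet. This is a
sketch: the file is only present on kernels which know about L1TF, so the
snippet guards against its absence.

```shell
# Read the L1TF status from sysfs; the file is absent on older kernels
# and on non-x86 systems, so fall back gracefully.
f=/sys/devices/system/cpu/vulnerabilities/l1tf
if [ -r "$f" ]; then
    status="$(cat "$f")"
else
    status="unknown (no L1TF sysfs entry)"
fi
echo "L1TF: $status"
```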
Host mitigation mechanism
-------------------------

The kernel is unconditionally protected against L1TF attacks from malicious
user space running on the host.


Guest mitigation mechanisms
---------------------------
.. _l1d_flush:

1. L1D flush on VMENTER
^^^^^^^^^^^^^^^^^^^^^^^

To make sure that a guest cannot attack data which is present in the L1D
the hypervisor flushes the L1D before entering the guest.

Flushing the L1D evicts not only the data which should not be accessed
by a potentially malicious guest, it also flushes the guest
data. Flushing the L1D has a performance impact as the processor has to
bring the flushed guest data back into the L1D. Depending on the
frequency of VMEXIT/VMENTER and the type of computations in the guest,
performance degradation in the range of 1% to 50% has been observed. For
scenarios where guest VMEXIT/VMENTER are rare the performance impact is
minimal. Virtio and mechanisms like posted interrupts are designed to
confine the VMEXITs to a bare minimum, but specific configurations and
application scenarios might still suffer from a high VMEXIT rate.

The kernel provides two L1D flush modes:

- conditional ('cond')
- unconditional ('always')

The conditional mode avoids L1D flushing after VMEXITs which execute
only audited code paths before the corresponding VMENTER. These code
paths have been verified not to expose secrets or other interesting
data to an attacker, but they can leak information about the address
space layout of the hypervisor.

Unconditional mode flushes L1D on all VMENTER invocations and provides
maximum protection. It has a higher overhead than the conditional
mode. The overhead cannot be quantified correctly as it depends on the
workload scenario and the resulting number of VMEXITs.

The general recommendation is to enable L1D flush on VMENTER. The kernel
defaults to conditional mode on affected processors.

**Note** that L1D flush does not prevent the SMT problem because the
sibling thread will also bring back its data into the L1D, which makes it
attackable again.

L1D flush can be controlled by the administrator via the kernel command
line and sysfs control files. See :ref:`mitigation_control_command_line`
and :ref:`mitigation_control_kvm`.
.. _guest_confinement:

2. Guest VCPU confinement to dedicated physical cores
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To address the SMT problem, it is possible to make a guest or a group of
guests affine to one or more physical cores. The proper mechanism for
that is to utilize exclusive cpusets to ensure that no other guest or
host tasks can run on these cores.

If only a single guest or related guests run on sibling SMT threads on
the same physical core then they can only attack their own memory and
restricted parts of the host memory.

Host memory is attackable when one of the sibling SMT threads runs in
host OS (hypervisor) context and the other in guest context. The amount
of valuable information from the host OS context depends on the context
which the host OS executes, i.e. interrupts, soft interrupts and kernel
threads. The amount of valuable data from these contexts cannot be
declared as non-interesting for an attacker without deep inspection of
the code.

**Note** that assigning guests to a fixed set of physical cores affects
the ability of the scheduler to do load balancing and might have
negative effects on CPU utilization depending on the hosting
scenario. Disabling SMT might be a viable alternative for particular
scenarios.

For further information about confining guests to a single or to a group
of cores consult the cpusets documentation:

https://www.kernel.org/doc/Documentation/cgroup-v1/cpusets.txt
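As a sketch of the cpuset approach, the following prints (rather than
executes) the commands an administrator would run as root on a host with
the cgroup-v1 cpuset controller mounted. The cpuset name "vm1", the CPU
list and the $GUEST_PID variable are hypothetical placeholders for the
actual topology and guest.

```shell
# Print a command sequence that would create an exclusive cpuset "vm1"
# covering logical CPUs 2,3,6,7 - assumed to be the SMT siblings of two
# physical cores - and move a guest's threads into it.
cpuset=/sys/fs/cgroup/cpuset/vm1
cmds="mkdir -p $cpuset
echo 2,3,6,7 > $cpuset/cpuset.cpus      # both siblings of two cores
echo 0 > $cpuset/cpuset.mems            # memory node 0
echo 1 > $cpuset/cpuset.cpu_exclusive   # no other tasks on these CPUs
echo \$GUEST_PID > $cpuset/tasks         # move the guest's threads over"
echo "$cmds"
```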
.. _interrupt_isolation:

3. Interrupt affinity
^^^^^^^^^^^^^^^^^^^^^

Interrupts can be made affine to logical CPUs. This is not universally
true because there are types of interrupts which are truly per-CPU
interrupts, e.g. the local timer interrupt. Aside from that, multi-queue
devices affine their interrupts to single CPUs or groups of CPUs per
queue without allowing the administrator to control the affinities.

Moving the interrupts, which can be affinity controlled, away from CPUs
which run untrusted guests, reduces the attack vector space.

Whether the interrupts which are affine to CPUs running untrusted
guests provide interesting data for an attacker depends on the system
configuration and the scenarios which run on the system. While for some
of the interrupts it can be assumed that they won't expose interesting
information beyond exposing hints about the host OS memory layout, there
is no way to make general assumptions.

Interrupt affinity can be controlled by the administrator via the
/proc/irq/$NR/smp_affinity[_list] files. Limited documentation is
available at:

https://www.kernel.org/doc/Documentation/IRQ-affinity.txt
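A minimal sketch: reading the current affinity of one IRQ and, in a
comment, how it would be changed. IRQ number 0 is just an arbitrary
example, and writing the affinity requires root.

```shell
# Read the affinity list of IRQ 0 (an arbitrary example); moving an IRQ
# away from guest CPUs would be done with e.g.
#   echo 0-1 > /proc/irq/$nr/smp_affinity_list   (as root)
nr=0
f="/proc/irq/$nr/smp_affinity_list"
if [ -r "$f" ]; then
    affinity="$(cat "$f")"
else
    affinity="unavailable"
fi
echo "IRQ $nr affinity: $affinity"
```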
.. _smt_control:

4. SMT control
^^^^^^^^^^^^^^

To prevent the SMT issues of L1TF it might be necessary to disable SMT
completely. Disabling SMT can have a significant performance impact, but
the impact depends on the hosting scenario and the type of workloads.
The impact of disabling SMT also needs to be weighed against the impact
of other mitigation solutions like confining guests to dedicated cores.

The kernel provides a sysfs interface to retrieve the status of SMT and
to control it. It also provides a kernel command line interface to
control SMT.

The kernel command line interface consists of the following options:

===========  ============================================================
nosmt        Affects the bring-up of the secondary CPUs during boot. The
             kernel tries to bring all present CPUs online during the
             boot process. "nosmt" makes sure that from each physical
             core only one - the so-called primary (hyper) thread - is
             activated. Due to a design flaw of Intel processors related
             to Machine Check Exceptions the non-primary siblings have
             to be brought up at least partially and are then shut down
             again. "nosmt" can be undone via the sysfs interface.

nosmt=force  Has the same effect as "nosmt" but it does not allow
             undoing the SMT disable via the sysfs interface.
===========  ============================================================

The sysfs interface provides two files:

- /sys/devices/system/cpu/smt/control
- /sys/devices/system/cpu/smt/active

/sys/devices/system/cpu/smt/control:

This file allows reading out the SMT control state and provides the
ability to disable or (re)enable SMT. The possible states are:

==============  ===================================================
on              SMT is supported by the CPU and enabled. All
                logical CPUs can be onlined and offlined without
                restrictions.

off             SMT is supported by the CPU and disabled. Only
                the so-called primary SMT threads can be onlined
                and offlined without restrictions. An attempt to
                online a non-primary sibling is rejected.

forceoff        Same as 'off' but the state cannot be controlled.
                Attempts to write to the control file are rejected.

notsupported    The processor does not support SMT. It's therefore
                not affected by the SMT implications of L1TF.
                Attempts to write to the control file are rejected.
==============  ===================================================

The possible states which can be written into this file to control SMT
state are:

- on
- off
- forceoff

/sys/devices/system/cpu/smt/active:

This file reports whether SMT is enabled and active, i.e. if on any
physical core two or more sibling threads are online.

SMT control is also possible at boot time via the l1tf kernel command
line parameter in combination with L1D flush control. See
:ref:`mitigation_control_command_line`.
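For illustration, the two sysfs files can be read as follows. This is a
sketch: the files only exist on kernels with SMT control support, and
changing the state requires root.

```shell
# Read the SMT control state and activity; disabling at runtime would be
#   echo off > /sys/devices/system/cpu/smt/control   (as root)
base=/sys/devices/system/cpu/smt
if [ -r "$base/control" ]; then
    smt_state="$(cat "$base/control")"
    smt_active="$(cat "$base/active")"
else
    smt_state="unavailable"
    smt_active="unavailable"
fi
echo "SMT control: $smt_state  active: $smt_active"
```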
5. Disabling EPT
^^^^^^^^^^^^^^^^

Disabling EPT for virtual machines provides full mitigation for L1TF even
with SMT enabled, because the effective page tables for guests are
managed and sanitized by the hypervisor. However, disabling EPT has a
significant performance impact, especially when the Meltdown mitigation
KPTI is enabled.

EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.

There is ongoing research and development for new mitigation mechanisms to
address the performance impact of disabling SMT or EPT.
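Whether EPT is currently enabled can be checked via the module's parameter
file. A sketch: the file only exists while kvm_intel is loaded, and the
parameter itself has to be set at module load time.

```shell
# Check the current EPT setting of kvm_intel; disabling would be done at
# module load time, e.g.:  modprobe kvm-intel ept=0
f=/sys/module/kvm_intel/parameters/ept
if [ -r "$f" ]; then
    ept="$(cat "$f")"    # 'Y' or 'N'
else
    ept="kvm_intel not loaded"
fi
echo "EPT enabled: $ept"
```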
.. _mitigation_control_command_line:

Mitigation control on the kernel command line
---------------------------------------------

The kernel command line allows controlling the L1TF mitigations at boot
time with the option "l1tf=". The valid arguments for this option are:

============  =============================================================
full          Provides all available mitigations for the L1TF
              vulnerability. Disables SMT and enables all mitigations in
              the hypervisors, i.e. unconditional L1D flushing.

              SMT control and L1D flush control via the sysfs interface
              is still possible after boot. Hypervisors will issue a
              warning when the first VM is started in a potentially
              insecure configuration, i.e. SMT enabled or L1D flush
              disabled.

full,force    Same as 'full', but disables SMT and L1D flush runtime
              control. Implies the 'nosmt=force' command line option.
              (i.e. sysfs control of SMT is disabled.)

flush         Leaves SMT enabled and enables the default hypervisor
              mitigation, i.e. conditional L1D flushing.

              SMT control and L1D flush control via the sysfs interface
              is still possible after boot. Hypervisors will issue a
              warning when the first VM is started in a potentially
              insecure configuration, i.e. SMT enabled or L1D flush
              disabled.

flush,nosmt   Disables SMT and enables the default hypervisor mitigation,
              i.e. conditional L1D flushing.

              SMT control and L1D flush control via the sysfs interface
              is still possible after boot. Hypervisors will issue a
              warning when the first VM is started in a potentially
              insecure configuration, i.e. SMT enabled or L1D flush
              disabled.

flush,nowarn  Same as 'flush', but hypervisors will not warn when a VM is
              started in a potentially insecure configuration.

off           Disables hypervisor mitigations and doesn't emit any
              warnings. It also drops the swap size and available RAM
              limit restrictions on both hypervisor and bare metal.
============  =============================================================

The default is 'flush'. For details about L1D flushing see :ref:`l1d_flush`.
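As an illustration, on a GRUB-based distribution the option could be added
to the kernel command line roughly as follows; the file path, the variable
name and the other options shown are distribution-dependent assumptions,
not part of the L1TF mitigation itself.

```shell
# Hypothetical excerpt of /etc/default/grub; regenerate the GRUB config
# afterwards (e.g. update-grub or grub2-mkconfig, depending on distro).
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash l1tf=full,force"
```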
.. _mitigation_control_kvm:

Mitigation control for KVM - module parameter
---------------------------------------------

The KVM hypervisor mitigation mechanism, flushing the L1D cache when
entering a guest, can be controlled with a module parameter.

The option/parameter is "kvm-intel.vmentry_l1d_flush=". It takes the
following arguments:

============  ==============================================================
always        L1D cache flush on every VMENTER.

cond          Flush L1D on VMENTER only when the code between VMEXIT and
              VMENTER can leak host memory which is considered
              interesting for an attacker. This still can leak host memory
              which allows e.g. determining the host's address space
              layout.

never         Disables the mitigation.
============  ==============================================================

The parameter can be provided on the kernel command line, as a module
parameter when loading the modules, and modified at runtime via the sysfs
file:

/sys/module/kvm_intel/parameters/vmentry_l1d_flush

The default is 'cond'. If 'l1tf=full,force' is given on the kernel command
line, then 'always' is enforced, the kvm-intel.vmentry_l1d_flush
module parameter is ignored and writes to the sysfs file are rejected.
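For example, the active mode can be read (and, as root, changed) via that
sysfs file. The snippet below is a sketch which guards against the module
not being loaded.

```shell
# Read the current vmentry_l1d_flush mode; switching at runtime would be
#   echo always > /sys/module/kvm_intel/parameters/vmentry_l1d_flush
f=/sys/module/kvm_intel/parameters/vmentry_l1d_flush
if [ -r "$f" ]; then
    mode="$(cat "$f")"
else
    mode="kvm_intel not loaded"
fi
echo "vmentry_l1d_flush: $mode"
```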
.. _mitigation_selection:

Mitigation selection guide
--------------------------

1. No virtualization in use
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The system is protected by the kernel unconditionally and no further
action is required.

2. Virtualization with trusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If the guest comes from a trusted source and the guest OS kernel is
guaranteed to have the L1TF mitigations in place, the system is fully
protected against L1TF and no further action is required.

To avoid the overhead of the default L1D flushing on VMENTER the
administrator can disable the flushing via the kernel command line and
sysfs control files. See :ref:`mitigation_control_command_line` and
:ref:`mitigation_control_kvm`.
3. Virtualization with untrusted guests
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

3.1. SMT not supported or disabled
""""""""""""""""""""""""""""""""""

If SMT is not supported by the processor or disabled in the BIOS or by
the kernel, it's only required to enforce L1D flushing on VMENTER.

Conditional L1D flushing is the default behaviour and can be tuned. See
:ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.

3.2. EPT not supported or disabled
""""""""""""""""""""""""""""""""""

If EPT is not supported by the processor or disabled in the hypervisor,
the system is fully protected. SMT can stay enabled and L1D flushing on
VMENTER is not required.

EPT can be disabled in the hypervisor via the 'kvm-intel.ept' parameter.
3.3. SMT and EPT supported and active
"""""""""""""""""""""""""""""""""""""

If SMT and EPT are supported and active then various degrees of
mitigations can be employed:

- L1D flushing on VMENTER:

  L1D flushing on VMENTER is the minimal protection requirement, but it
  is only potent in combination with other mitigation methods.

  Conditional L1D flushing is the default behaviour and can be tuned. See
  :ref:`mitigation_control_command_line` and :ref:`mitigation_control_kvm`.

- Guest confinement:

  Confinement of guests to a single or a group of physical cores which
  are not running any other processes can reduce the attack surface
  significantly, but interrupts, soft interrupts and kernel threads can
  still expose valuable data to a potential attacker. See
  :ref:`guest_confinement`.

- Interrupt isolation:

  Isolating the guest CPUs from interrupts can reduce the attack surface
  further, but still allows a malicious guest to explore a limited amount
  of host physical memory. This can at least be used to gain knowledge
  about the host address space layout. The interrupts which have a fixed
  affinity to the CPUs which run the untrusted guests can, depending on
  the scenario, still trigger soft interrupts and schedule kernel threads
  which might expose valuable information. See
  :ref:`interrupt_isolation`.

The above three mitigation methods combined can provide protection to a
certain degree, but the risk of the remaining attack surface has to be
carefully analyzed. For full protection the following methods are
available:

- Disabling SMT:

  Disabling SMT and enforcing the L1D flushing provides the maximum
  amount of protection. This mitigation does not depend on any of the
  above mitigation methods.

  SMT control and L1D flushing can be tuned by the command line
  parameters 'nosmt', 'l1tf', 'kvm-intel.vmentry_l1d_flush' and at run
  time with the matching sysfs control files. See :ref:`smt_control`,
  :ref:`mitigation_control_command_line` and
  :ref:`mitigation_control_kvm`.

- Disabling EPT:

  Disabling EPT provides the maximum amount of protection as well. It
  does not depend on any of the above mitigation methods. SMT can stay
  enabled and L1D flushing is not required, but the performance impact is
  significant.

  EPT can be disabled in the hypervisor via the 'kvm-intel.ept'
  parameter.
3.4. Nested virtual machines
""""""""""""""""""""""""""""

When nested virtualization is in use, three operating systems are involved:
the bare metal hypervisor, the nested hypervisor and the nested virtual
machine. VMENTER operations from the nested hypervisor into the nested
guest will always be processed by the bare metal hypervisor. If KVM is the
bare metal hypervisor it will:

- Flush the L1D cache on every switch from the nested hypervisor to the
  nested virtual machine, so that the nested hypervisor's secrets are not
  exposed to the nested virtual machine;

- Flush the L1D cache on every switch from the nested virtual machine to
  the nested hypervisor; this is a complex operation, and flushing the L1D
  cache avoids that the bare metal hypervisor's secrets are exposed to the
  nested virtual machine;

- Instruct the nested hypervisor to not perform any L1D cache flush. This
  is an optimization to avoid double L1D flushing.
.. _default_mitigations:

Default mitigations
-------------------

The kernel default mitigations for vulnerable processors are:

- PTE inversion to protect against malicious user space. This is done
  unconditionally and cannot be controlled. The swap storage is limited
  to ~16TB.

- L1D conditional flushing on VMENTER when EPT is enabled for
  a guest.

The kernel does not by default enforce the disabling of SMT, which leaves
SMT systems vulnerable when running untrusted guests with EPT enabled.
The rationale for this choice is:

- Force-disabling SMT can break existing setups, especially with
  unattended updates.

- If regular users run untrusted guests on their machine, then L1TF is
  just an add-on to other malware which might be embedded in an untrusted
  guest, e.g. spam bots or attacks on the local network.

  There is no technical way to prevent a user from blindly running
  untrusted code on their machines.

- It's technically extremely unlikely and, from today's knowledge, even
  impossible that L1TF can be exploited via the most popular attack
  mechanisms like JavaScript because these mechanisms have no way to
  control PTEs. If that were possible and no other mitigation were
  available, then the default might be different.

- The administrators of cloud and hosting setups have to carefully
  analyze the risk for their scenarios and make the appropriate
  mitigation choices, which might even vary across their deployed
  machines and also result in other changes of their overall setup.
  There is no way for the kernel to provide a sensible default for this
  kind of scenario.