commit cd363bb9548ec3208120bb3f55ff4ded2487d7fb
Author: Greg Kroah-Hartman
Date:   Sat Aug 26 13:27:01 2023 +0200

    Linux 6.1.48

    Link: https://lore.kernel.org/r/20230824141447.155846739@linuxfoundation.org
    Tested-by: Florian Fainelli
    Tested-by: SeongJae Park
    Tested-by: Joel Fernandes (Google)
    Tested-by: Sudip Mukherjee
    Tested-by: Linux Kernel Functional Testing
    Tested-by: Jon Hunter
    Tested-by: Conor Dooley
    Tested-by: Takeshi Ogasawara
    Tested-by: Shuah Khan
    Tested-by: Bagas Sanjaya
    Tested-by: Salvatore Bonaccorso
    Signed-off-by: Greg Kroah-Hartman

commit 7487244912b13a918812b8f830e6a0dde269fdeb
Author: Borislav Petkov (AMD)
Date:   Tue Aug 15 11:53:13 2023 +0200

    x86/srso: Correct the mitigation status when SMT is disabled

    commit 6405b72e8d17bd1875a56ae52d23ec3cd51b9d66 upstream.

    Specify how SRSO is mitigated when SMT is disabled. Also, correct the
    SMT check for that.

    Fixes: e9fbc47b818b ("x86/srso: Disable the mitigation on unaffected configurations")
    Suggested-by: Josh Poimboeuf
    Signed-off-by: Borislav Petkov (AMD)
    Acked-by: Josh Poimboeuf
    Link: https://lore.kernel.org/r/20230814200813.p5czl47zssuej7nv@treble
    Signed-off-by: Greg Kroah-Hartman

commit 4da4aae04b7f3d5ba4345a75f92de00a54644f11
Author: Peter Zijlstra
Date:   Wed Aug 16 13:59:21 2023 +0200

    objtool/x86: Fixup frame-pointer vs rethunk

    commit dbf46008775516f7f25c95b7760041c286299783 upstream.

    For stack validation of a frame-pointer build, objtool validates that
    every CALL instruction is preceded by a frame setup. The new SRSO
    return thunks violate this with their RSB stuffing trickery.

    Extend the __fentry__ exception to also cover the embedded_insn case
    used for this. This cures:

      vmlinux.o: warning: objtool: srso_untrain_ret+0xd: call without frame pointer save/setup

    Fixes: 4ae68b26c3ab ("objtool/x86: Fix SRSO mess")
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Acked-by: Josh Poimboeuf
    Link: https://lore.kernel.org/r/20230816115921.GH980931@hirez.programming.kicks-ass.net
    Signed-off-by: Greg Kroah-Hartman
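For context on the warning above: in a frame-pointer build, objtool expects
the standard frame setup to precede every CALL. A minimal user-space sketch
of that convention follows; the symbols callee, with_frame and without_frame
are invented for illustration and are not kernel code.

    /* with_frame follows the frame-pointer convention objtool checks
     * for; without_frame issues a CALL with no preceding frame setup,
     * which is the code shape that triggers the warning above. */
    void callee(void) { }
    void with_frame(void);
    void without_frame(void);

    asm(".text\n"
        ".globl with_frame\n"
        "with_frame:\n\t"
        "push %rbp\n\t"            /* save caller's frame pointer   */
        "mov  %rsp, %rbp\n\t"      /* frame setup precedes the CALL */
        "call callee\n\t"
        "pop  %rbp\n\t"
        "ret\n\t"
        ".globl without_frame\n"
        "without_frame:\n\t"
        "call callee\n\t"          /* CALL without frame setup      */
        "ret\n");

    int main(void)
    {
        with_frame();
        without_frame();
        return 0;
    }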
commit c8b056a3b4ebb33adbb873cab152ed499d1a1dcb
Author: Petr Pavlu
Date:   Tue Jul 11 11:19:51 2023 +0200

    x86/retpoline,kprobes: Fix position of thunk sections with CONFIG_LTO_CLANG

    commit 79cd2a11224eab86d6673fe8a11d2046ae9d2757 upstream.

    The linker script arch/x86/kernel/vmlinux.lds.S matches the thunk
    sections ".text.__x86.*" from arch/x86/lib/retpoline.S as follows:

      .text {
        [...]
        TEXT_TEXT
        [...]
        __indirect_thunk_start = .;
        *(.text.__x86.*)
        __indirect_thunk_end = .;
        [...]
      }

    Macro TEXT_TEXT references TEXT_MAIN which normally expands to only
    ".text". However, with CONFIG_LTO_CLANG, TEXT_MAIN becomes
    ".text .text.[0-9a-zA-Z_]*" which wrongly matches also the thunk
    sections. The output layout is then different from what is expected.
    For instance, the currently defined range [__indirect_thunk_start,
    __indirect_thunk_end] becomes empty.

    Prevent the problem by using ".." as the first separator, for example,
    ".text..__x86.indirect_thunk". This pattern is utilized by other
    explicit section names which start with one of the standard prefixes,
    such as ".text" or ".data", and that need to be individually selected
    in the linker script.

    [ nathan: Fix conflicts with SRSO and fold in a fix for an issue
      brought up by Andrew Cooper in post-review:
      https://lore.kernel.org/20230803230323.1478869-1-andrew.cooper3@citrix.com ]

    Fixes: dc5723b02e52 ("kbuild: add support for Clang LTO")
    Signed-off-by: Petr Pavlu
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Nathan Chancellor
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230711091952.27944-2-petr.pavlu@suse.com
    Signed-off-by: Greg Kroah-Hartman

commit dae93ed961a8a315cefe64491e8573c5d9e3484e
Author: Borislav Petkov (AMD)
Date:   Sun Aug 13 12:39:34 2023 +0200

    x86/srso: Disable the mitigation on unaffected configurations

    commit e9fbc47b818b964ddff5df5b2d5c0f5f32f4a147 upstream.

    Skip the srso command line parsing, which is not needed on Zen1/2 with
    SMT disabled and with the proper microcode applied (the latter should
    be the case anyway), as those configurations are not affected.

    Fixes: 5a15d8348881 ("x86/srso: Tie SBPB bit setting to microcode patch detection")
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230813104517.3346-1-bp@alien8.de
    Signed-off-by: Greg Kroah-Hartman

commit e4679a0342e05a962639a6ec3781f257f417f0ff
Author: Borislav Petkov (AMD)
Date:   Fri Aug 11 23:38:24 2023 +0200

    x86/CPU/AMD: Fix the DIV(0) initial fix attempt

    commit f58d6fbcb7c848b7f2469be339bc571f2e9d245b upstream.

    Initially, it was thought that doing an innocuous division in the #DE
    handler would be enough to prevent any leaking of old data from the
    divider, but by the time the fault is raised, the speculation has
    already advanced too far and such data could already have been used
    by younger operations.

    Therefore, do the innocuous division on every exit to userspace so
    that userspace doesn't see any potentially old data from integer
    divisions in kernel space. Do the same before VMRUN too, to protect
    host data from leaking into the guest.

    Fixes: 77245f1c3c64 ("x86/CPU/AMD: Do not leak quotient data after a division by 0")
    Signed-off-by: Borislav Petkov (AMD)
    Cc:
    Link: https://lore.kernel.org/r/20230811213824.10025-1-bp@alien8.de
    Signed-off-by: Greg Kroah-Hartman
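The mechanism behind the "innocuous division" is simple to sketch. Below is
a hedged outline modeled on the upstream helper: clear_divider is an
illustrative name, and the real kernel version is gated by an ALTERNATIVE
on the affected-CPU bug bit and invoked on the exit-to-userspace path and
before VMRUN.

    /* Dividing 0 by 1 refreshes the divider's internal state with
     * harmless values: quotient 0, remainder 0. Because the outputs
     * equal the inputs, %rax and %rdx are left unchanged and need no
     * clobbers. Sketch only; not the kernel's actual helper. */
    static inline void clear_divider(void)
    {
        asm volatile("div %2"              /* RDX:RAX / 1 = 0 rem 0 */
                     :: "a" (0), "d" (0), "r" (1UL)
                     : "cc");
    }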
commit b41eb316c95c98d16dba2045c4890a3020f0a5bc
Author: Sean Christopherson
Date:   Fri Aug 11 08:52:55 2023 -0700

    x86/retpoline: Don't clobber RFLAGS during srso_safe_ret()

    commit ba5ca5e5e6a1d55923e88b4a83da452166f5560e upstream.

    Use LEA instead of ADD when adjusting %rsp in srso_safe_ret{,_alias}()
    so as to avoid clobbering flags. Drop one of the INT3 instructions to
    account for the LEA consuming one more byte than the ADD.

    KVM's emulator makes indirect calls into a jump table of sorts, where
    the destination of each call is a small blob of code that performs
    fast emulation by executing the target instruction with fixed
    operands.

    E.g. to emulate ADC, fastop() invokes adcb_al_dl():

      adcb_al_dl:
        <+0>: adc    %dl,%al
        <+2>: jmp    <__x86_return_thunk>

    A major motivation for doing fast emulation is to leverage the CPU to
    handle consumption and manipulation of arithmetic flags, i.e. RFLAGS
    is both an input and output to the target of the call. fastop()
    collects the RFLAGS result by pushing RFLAGS onto the stack and
    popping them back into a variable (held in %rdi in this case):

      asm("push %[flags]; popf; " CALL_NOSPEC " ; pushf; pop %[flags]\n"

      <+71>: mov    0xc0(%r8),%rdx
      <+78>: mov    0x100(%r8),%rcx
      <+85>: push   %rdi
      <+86>: popf
      <+87>: call   *%rsi
      <+89>: nop
      <+90>: nop
      <+91>: nop
      <+92>: pushf
      <+93>: pop    %rdi

    and then propagating the arithmetic flags into the vCPU's emulator
    state:

      ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);

      <+64>:  and    $0xfffffffffffff72a,%r9
      <+94>:  and    $0x8d5,%edi
      <+109>: or     %rdi,%r9
      <+122>: mov    %r9,0x10(%r8)

    The failures can be most easily reproduced by running the "emulator"
    test in KVM-Unit-Tests.

    If you're feeling a bit of deja vu, see commit b63f20a778c8
    ("x86/retpoline: Don't clobber RFLAGS during CALL_NOSPEC on i386").

    In addition, this breaks booting of clang-compiled guests on a
    gcc-compiled host where the host contains the %rsp-modifying SRSO
    mitigations.

    [ bp: Massage commit message, extend, remove addresses. ]

    Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
    Closes: https://lore.kernel.org/all/de474347-122d-54cd-eabf-9dcc95ab9eae@amd.com
    Reported-by: Srikanth Aithal
    Reported-by: Nathan Chancellor
    Signed-off-by: Sean Christopherson
    Signed-off-by: Borislav Petkov (AMD)
    Tested-by: Nathan Chancellor
    Cc: stable@vger.kernel.org
    Link: https://lore.kernel.org/20230810013334.GA5354@dev-arch.thelio-3990X/
    Link: https://lore.kernel.org/r/20230811155255.250835-1-seanjc@google.com
    Signed-off-by: Greg Kroah-Hartman
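The ADD-versus-LEA distinction is easy to demonstrate in user space: both
can adjust a register (including %rsp) by a constant, but ADD rewrites the
arithmetic flags while LEA computes the address without touching them. A
small stand-alone check, illustrative only and unrelated to the kernel's
actual thunk code:

    #include <stdio.h>

    int main(void)
    {
        unsigned long f_add, f_lea;

        /* Set CF, bump a register with ADD, read RFLAGS back. */
        asm volatile("xor %%eax, %%eax\n\t"   /* rax = 0            */
                     "stc\n\t"                /* set the carry flag */
                     "add $8, %%rax\n\t"      /* ADD rewrites flags */
                     "pushf\n\t"
                     "pop %0"
                     : "=r" (f_add) :: "rax", "cc");

        /* Same again, but adjust the register with LEA instead. */
        asm volatile("xor %%eax, %%eax\n\t"
                     "stc\n\t"
                     "lea 8(%%rax), %%rax\n\t" /* LEA leaves flags  */
                     "pushf\n\t"
                     "pop %0"
                     : "=r" (f_lea) :: "rax", "cc");

        /* RFLAGS bit 0 is CF: cleared by the ADD, preserved by LEA. */
        printf("CF after ADD: %lu, after LEA: %lu\n",
               f_add & 1, f_lea & 1);
        return 0;
    }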
commit c1f831425fe900fd18789733b66f4c987345296c
Author: Peter Zijlstra
Date:   Wed Aug 16 12:44:19 2023 +0200

    x86/static_call: Fix __static_call_fixup()

    commit 54097309620ef0dc2d7083783dc521c6a5fef957 upstream.

    Christian reported spurious module load crashes after some of Song's
    module memory layout patches. Turns out that if the very last
    instruction on the very last page of the module is a
    'JMP __x86_return_thunk' then __static_call_fixup() will trip a fault
    and die.

    And while the module rework made this slightly more likely to happen,
    it's always been possible.

    Fixes: ee88d363d156 ("x86,static_call: Use alternative RET encoding")
    Reported-by: Christian Bricart
    Signed-off-by: Peter Zijlstra (Intel)
    Acked-by: Josh Poimboeuf
    Link: https://lkml.kernel.org/r/20230816104419.GA982867@hirez.programming.kicks-ass.net
    Signed-off-by: Greg Kroah-Hartman

commit c16d0b3baff418ca09c4f6a7e04f0bfc472187bf
Author: Borislav Petkov (AMD)
Date:   Mon Aug 14 21:29:50 2023 +0200

    x86/srso: Explain the untraining sequences a bit more

    commit 9dbd23e42ff0b10c9b02c9e649c76e5228241a8e upstream.

    The goal is to eventually have proper documentation about all this.

    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814164447.GFZNpZ/64H4lENIe94@fat_crate.local
    Signed-off-by: Greg Kroah-Hartman

commit 529a9f087a7e03e87dd9c2acd687739e3e348045
Author: Peter Zijlstra
Date:   Mon Aug 14 13:44:34 2023 +0200

    x86/cpu: Cleanup the untrain mess

    commit e7c25c441e9e0fa75b4c83e0b26306b702cfe90d upstream.

    Since there can only be one active return_thunk, there only needs to
    be one (matching) untrain_ret. It fundamentally doesn't make sense to
    allow multiple untrain_ret at the same time.

    Fold all three different untrain methods into a single (temporary)
    helper stub.

    Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814121149.042774962@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit e6b40d2cb5aae35cc3659f9b74c999a01120ad30
Author: Peter Zijlstra
Date:   Mon Aug 14 13:44:33 2023 +0200

    x86/cpu: Rename srso_(.*)_alias to srso_alias_\1

    commit 42be649dd1f2eee6b1fb185f1a231b9494cf095f upstream.

    For a more consistent namespace.

    [ bp: Fixup names in the doc too. ]

    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814121148.976236447@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit 54dde78a50a8eb0d6a293c8416fcf6365d189af3
Author: Peter Zijlstra
Date:   Mon Aug 14 13:44:32 2023 +0200

    x86/cpu: Rename original retbleed methods

    commit d025b7bac07a6e90b6b98b487f88854ad9247c39 upstream.

    Rename the original retbleed return thunk and untrain_ret to
    retbleed_return_thunk() and retbleed_untrain_ret().

    No functional changes.

    Suggested-by: Josh Poimboeuf
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814121148.909378169@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit 44dbc912fd8a132f01eca8fc0928d6c9beb1f51c
Author: Peter Zijlstra
Date:   Mon Aug 14 13:44:31 2023 +0200

    x86/cpu: Clean up SRSO return thunk mess

    commit d43490d0ab824023e11d0b57d0aeec17a6e0ca13 upstream.

    Use the existing configurable return thunk. There is absolutely no
    justification for having created this __x86_return_thunk alternative.

    To clarify, the whole thing looks like:

    Zen3/4 does:

      srso_alias_untrain_ret:
          nop2
          lfence
          jmp srso_alias_return_thunk
          int3

      srso_alias_safe_ret: // aliases srso_alias_untrain_ret just so
          add $8, %rsp
          ret
          int3

      srso_alias_return_thunk:
          call srso_alias_safe_ret
          ud2

    While Zen1/2 does:

      srso_untrain_ret:
          movabs $foo, %rax
          lfence
          call srso_safe_ret             (jmp srso_return_thunk ?)
          int3

      srso_safe_ret: // embedded in movabs instruction
          add $8,%rsp
          ret
          int3

      srso_return_thunk:
          call srso_safe_ret
          ud2

    While retbleed does:

      zen_untrain_ret:
          test $0xcc, %bl
          lfence
          jmp zen_return_thunk
          int3

      zen_return_thunk: // embedded in the test instruction
          ret
          int3

    Where Zen1/2 flush the BTB entry using the instruction decoder trick
    (test,movabs), Zen3/4 use BTB aliasing. SRSO adds a return sequence
    (srso_safe_ret()) which forces the function return instruction to
    speculate into a trap (UD2). This RET will then mispredict and
    execution will continue at the return site read from the top of the
    stack.

    Pick one of three options at boot (every function can only ever
    return once).

    [ bp: Fixup commit message uarch details and add them in a comment in
      the code too. Add a comment about the srso_select_mitigation()
      dependency on retbleed_select_mitigation(). Add moar ifdeffery for
      32-bit builds. Add a dummy srso_untrain_ret_alias() definition for
      32-bit alternatives needing the symbol. ]

    Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814121148.842775684@infradead.org
    Signed-off-by: Greg Kroah-Hartman
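That "pick one of three options at boot" shape can be sketched in plain C.
This is a hedged outline: x86_return_thunk and the thunk names mirror
symbols used in these commits, but the enum, the selector function and the
stand-in bodies below are invented for illustration. In the kernel the
thunks are asm in retpoline.S and the selection happens in the mitigation
code, taking effect when alternatives patching rewrites every return site.

    /* Stand-in bodies; the real thunks are asm in retpoline.S. */
    static void retbleed_return_thunk(void)   { }  /* decoder trick */
    static void srso_return_thunk(void)       { }  /* Zen1/2        */
    static void srso_alias_return_thunk(void) { }  /* Zen3/4 alias  */
    static void default_return_thunk(void)    { }  /* plain RET     */

    /* One configurable return thunk: every compiler-emitted
     * "jmp __x86_return_thunk" gets retargeted at whatever this
     * points to, so only one option can ever be active. */
    static void (*x86_return_thunk)(void) = default_return_thunk;

    enum ret_mitigation { NONE, RETBLEED, SRSO, SRSO_ALIAS };

    static void select_return_thunk(enum ret_mitigation m)
    {
        switch (m) {
        case RETBLEED:   x86_return_thunk = retbleed_return_thunk;   break;
        case SRSO:       x86_return_thunk = srso_return_thunk;       break;
        case SRSO_ALIAS: x86_return_thunk = srso_alias_return_thunk; break;
        default:         break;   /* keep the default thunk */
        }
        /* The kernel's apply_returns() would now patch return sites. */
    }

    int main(void)
    {
        select_return_thunk(SRSO);
        x86_return_thunk();   /* all returns funnel through one thunk */
        return 0;
    }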
commit 53ebbe1c8c02aa7b7f072dd2f96bca4faa1daa59
Author: Peter Zijlstra
Date:   Mon Aug 14 13:44:30 2023 +0200

    x86/alternative: Make custom return thunk unconditional

    commit 095b8303f3835c68ac4a8b6d754ca1c3b6230711 upstream.

    There is infrastructure to rewrite return thunks to point to any
    random thunk one desires; unwrap that from CALL_THUNKS, which up to
    now was the sole user of it.

    [ bp: Make the thunks visible on 32-bit and add ifdeffery for the
      32-bit builds. ]

    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814121148.775293785@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit 8bb1ed390d358f6c167f45a27305b775aa8fc898
Author: Peter Zijlstra
Date:   Mon Aug 14 13:44:28 2023 +0200

    x86/cpu: Fix up srso_safe_ret() and __x86_return_thunk()

    commit af023ef335f13c8b579298fc432daeef609a9e60 upstream.

      vmlinux.o: warning: objtool: srso_untrain_ret() falls through to next function __x86_return_skl()
      vmlinux.o: warning: objtool: __x86_return_thunk() falls through to next function __x86_return_skl()

    This is because these functions (can) end with CALL, which objtool
    does not consider a terminating instruction. Therefore, replace the
    INT3 instruction (which is a non-fatal trap) with UD2 (which is a
    fatal trap). This indicates that execution will not continue past
    this point.

    Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814121148.637802730@infradead.org
    Signed-off-by: Greg Kroah-Hartman

commit 6e4dd7d2636d8e8fb634cc9e7fc84eb13928e88a
Author: Peter Zijlstra
Date:   Mon Aug 14 13:44:27 2023 +0200

    x86/cpu: Fix __x86_return_thunk symbol type

    commit 77f67119004296a9b2503b377d610e08b08afc2a upstream.

    Commit fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow
    mitigation") reimplemented __x86_return_thunk with a mix of
    SYM_FUNC_START and SYM_CODE_END; this is not a sane combination.

    Since nothing should ever actually 'CALL' this, make it consistently
    CODE.

    Fixes: fb3bd914b3ec ("x86/srso: Add a Speculative RAS Overflow mitigation")
    Signed-off-by: Peter Zijlstra (Intel)
    Signed-off-by: Borislav Petkov (AMD)
    Link: https://lore.kernel.org/r/20230814121148.571027074@infradead.org
    Signed-off-by: Greg Kroah-Hartman
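On the INT3-versus-UD2 distinction in the srso_safe_ret() fixup above:
INT3 is a benign breakpoint trap whose handler can resume execution at the
next instruction, while UD2 is a fatal invalid-opcode fault raised at the
instruction itself, which is why objtool treats it as a terminator. A small
user-space illustration, with POSIX signals standing in for the kernel's
exception handling:

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>

    static void on_sigtrap(int sig)
    {
        /* int3 is a trap: the saved RIP already points past it, so
         * returning from the handler simply continues execution. */
        puts("int3 -> SIGTRAP, continuing");
    }

    static void on_sigill(int sig)
    {
        /* ud2 is a fault: the saved RIP points at the ud2 itself, so
         * returning would re-execute it. Execution cannot move on. */
        puts("ud2 -> SIGILL, cannot continue");
        exit(0);
    }

    int main(void)
    {
        signal(SIGTRAP, on_sigtrap);
        signal(SIGILL, on_sigill);
        asm volatile("int3");   /* non-fatal: falls through afterwards  */
        asm volatile("ud2");    /* fatal: a dead end, never falls through */
        return 1;               /* never reached */
    }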