commit f5ce9c409e4924c6c12d31453cce01400ff5481c
Author: Greg Kroah-Hartman
Date:   Sat Sep 2 09:13:30 2023 +0200

    Linux 6.5.1

    Link: https://lore.kernel.org/r/20230831110830.817738361@linuxfoundation.org
    Tested-by: Ron Economos
    Tested-by: Bagas Sanjaya
    Tested-by: Justin M. Forbes
    Tested-by: Linux Kernel Functional Testing
    Tested-by: Shuah Khan
    Tested-by: Jon Hunter
    Tested-by: Guenter Roeck
    Signed-off-by: Greg Kroah-Hartman

commit 7ed58960005f770107ab9687a0fcac29bf5564a2
Author: Yonghong Song
Date:   Thu Aug 24 20:46:59 2023 -0700

    kallsyms: Fix kallsyms_selftest failure

    commit 33f0467fe06934d5e4ea6e24ce2b9c65ce618e26 upstream.

    The kernel test robot reported a kallsyms_test failure when clang LTO is
    enabled (thin or full) and CONFIG_KALLSYMS_SELFTEST is also enabled. I can
    reproduce it in my local environment with the following error message with
    thin LTO:

      [ 1.877897] kallsyms_selftest: Test for 1750th symbol failed: (tsc_cs_mark_unstable) addr=ffffffff81038090
      [ 1.877901] kallsyms_selftest: abort

    It appears that commit 8cc32a9bbf29 ("kallsyms: strip LTO-only suffixes
    from promoted global functions") caused the failure. Commit 8cc32a9bbf29
    changed cleanup_symbol_name() to strip the suffix starting at ".llvm."
    instead of '.', where ".llvm." is appended to the pre-LTO-optimization
    name of a local symbol. We need to propagate that knowledge to
    kallsyms_selftest.c as well. Furthermore, compare_symbol_name() in
    kallsyms.c needs to change as well.

    In scripts/kallsyms.c, kallsyms_names and kallsyms_seqs_of_names are used
    to record the symbol names themselves and the indexes to the symbol
    names, respectively. For example:

      kallsyms_names:
        ...
        __amd_smn_rw._entry    <== seq 1000
        __amd_smn_rw._entry.5  <== seq 1001
        __amd_smn_rw.llvm.     <== seq 1002
        ...

    kallsyms_seqs_of_names is sorted based on cleanup_symbol_name(), though,
    so the order in kallsyms_seqs_of_names actually is:

      index 1000: seq 1002  <== __amd_smn_rw.llvm. (actual symbol comparison using '__amd_smn_rw')
      index 1001: seq 1000  <== __amd_smn_rw._entry
      index 1002: seq 1001  <== __amd_smn_rw._entry.5

    Let us say that at a particular point, at index 1000, symbol
    '__amd_smn_rw.llvm.' is compared to '__amd_smn_rw._entry', where
    '__amd_smn_rw._entry' is the one being searched for, e.g., with
    kallsyms_on_each_match_symbol(). The current implementation finds that
    '__amd_smn_rw._entry' is less than '__amd_smn_rw.llvm.' and then continues
    the search at, e.g., index 999, and never finds a match, although index
    1001 actually is a match.

    To fix this issue, do cleanup_symbol_name() first and then do the
    comparison. In the above case, '__amd_smn_rw' is compared to
    '__amd_smn_rw._entry'; since '__amd_smn_rw._entry' is greater than
    '__amd_smn_rw', the next comparison moves above index 1000, and eventually
    index 1001 is hit and a match is found. For any symbol that does not
    contain the '.llvm.' substring, there is no functional change in
    compare_symbol_name().

    Fixes: 8cc32a9bbf29 ("kallsyms: strip LTO-only suffixes from promoted global functions")
    Reported-by: kernel test robot
    Closes: https://lore.kernel.org/oe-lkp/202308232200.1c932a90-oliver.sang@intel.com
    Signed-off-by: Yonghong Song
    Reviewed-by: Song Liu
    Reviewed-by: Zhen Lei
    Link: https://lore.kernel.org/r/20230825034659.1037627-1-yonghong.song@linux.dev
    Cc: stable@vger.kernel.org
    Signed-off-by: Kees Cook
    Signed-off-by: Greg Kroah-Hartman
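[Editor's note] The ordering problem in the kallsyms commit above is easier to see in code. The sketch below is not the kernel's implementation; it is a minimal userspace illustration, in which cleanup_name() and compare_cleaned() are hypothetical helpers, of why stripping the ".llvm." suffix from both names before comparing keeps a binary search consistent with the sort order of kallsyms_seqs_of_names.

/*
 * Minimal illustration (not kernel code): compare two symbol names the way
 * the sorted table expects. Both sides are first stripped of any ".llvm."
 * suffix, mirroring the sort key used for kallsyms_seqs_of_names, so a
 * binary search using this comparison cannot walk off in the wrong
 * direction as described above.
 */
#include <stdio.h>
#include <string.h>

static void cleanup_name(const char *in, char *out, size_t outlen)
{
	const char *p = strstr(in, ".llvm.");
	size_t n = p ? (size_t)(p - in) : strlen(in);

	if (n >= outlen)
		n = outlen - 1;
	memcpy(out, in, n);
	out[n] = '\0';
}

static int compare_cleaned(const char *a, const char *b)
{
	char ca[128], cb[128];

	cleanup_name(a, ca, sizeof(ca));
	cleanup_name(b, cb, sizeof(cb));
	return strcmp(ca, cb);
}

int main(void)
{
	/* Positive result: '__amd_smn_rw._entry' now sorts after the entry
	 * whose comparison key is reduced to '__amd_smn_rw'. */
	printf("%d\n", compare_cleaned("__amd_smn_rw._entry", "__amd_smn_rw.llvm."));
	return 0;
}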
commit 23520b4443fcf181356cf1354eae86a4967e71a2
Author: Helge Deller
Date:   Tue Aug 15 00:31:09 2023 +0200

    lockdep: fix static memory detection even more

    commit 0a6b58c5cd0dfd7961e725212f0fc8dfc5d96195 upstream.

    On the parisc architecture, lockdep reports this warning for all static
    objects which are in the __initdata section (e.g. "setup_done" in
    devtmpfs, "kthreadd_done" in init/main.c):

      INFO: trying to register non-static key.

    The warning itself is wrong, because those objects are in the __initdata
    section, but on parisc that section lies outside of the range from _stext
    to _end, which is why the static_obj() function returns a wrong answer.

    While fixing this issue, I noticed that the whole existing check can be
    simplified a lot. Instead of checking against the _stext and _end symbols
    (which include code areas too), just check for the .data and .bss segments
    (since we are checking a data object). This can be done with the existing
    is_kernel_core_data() macro. In addition, objects in the __initdata
    section can be checked with init_section_contains(), and
    is_kernel_rodata() allows keys to be in the __ro_after_init section.

    This partly reverts and simplifies commit bac59d18c701 ("x86/setup: Fix
    static memory detection").

    Link: https://lkml.kernel.org/r/ZNqrLRaOi/3wPAdp@p100
    Fixes: bac59d18c701 ("x86/setup: Fix static memory detection")
    Signed-off-by: Helge Deller
    Cc: Borislav Petkov
    Cc: Greg Kroah-Hartman
    Cc: Guenter Roeck
    Cc: Peter Zijlstra
    Cc: "Rafael J. Wysocki"
    Signed-off-by: Andrew Morton
    Signed-off-by: Greg Kroah-Hartman

commit c8df2560f19675ad847372a651c2cef05908c48c
Author: Eric Dumazet
Date:   Thu Jul 20 11:09:01 2023 +0000

    ipv6: remove hard coded limitation on ipv6_pinfo

    commit f5f80e32de12fad2813d37270e8364a03e6d3ef0 upstream.

    IPv6 inet sockets are supposed to have a "struct ipv6_pinfo" field at the
    end of their definition, so that inet6_sk_generic() can derive the offset
    of the "struct ipv6_pinfo" from the socket size.

    This is very fragile, and it prevents adding bigger alignment to sockets,
    because inet6_sk_generic() does not work if the compiler adds padding
    after the ipv6_pinfo component.

    We are currently working on a patch series to reorganize TCP structures
    for better data locality and found issues similar to the one fixed in
    commit f5d547676ca0 ("tcp: fix tcp_inet6_sk() for 32bit kernels").

    An alternative would be to force an alignment on "struct ipv6_pinfo"
    greater than or equal to __alignof__(any ipv6 sock) to ensure there is no
    padding. This does not look great.

    v2: fix typo in mptcp_proto_v6_init() (Paolo)

    Signed-off-by: Eric Dumazet
    Cc: Chao Wu
    Cc: Wei Wang
    Cc: Coco Li
    Cc: YiFei Zhu
    Reviewed-by: Simon Horman
    Signed-off-by: David S. Miller
    Cc: Jakub Kicinski
    Signed-off-by: Greg Kroah-Hartman
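[Editor's note] The fragile layout assumption described in the ipv6 commit above can be shown with a short sketch. This is not the kernel's code, only a minimal model under the assumption that a protocol socket embeds its ipv6_pinfo as the last member; 'tcp6_sock_like', 'ipv6_pinfo_like', and pinfo_from_size() are hypothetical stand-ins, with obj_size playing the role of sk->sk_prot->obj_size. The offset is derived purely from the object size, so any tail padding added by the compiler breaks the calculation.

/*
 * Minimal model (not kernel code) of the size-derived lookup described in
 * the commit above: the pinfo pointer is computed from the total object
 * size, which is only correct if nothing is padded in after 'inet6'.
 */
#include <stddef.h>
#include <stdio.h>

struct ipv6_pinfo_like { int hop_limit; int flags; };

struct tcp6_sock_like {
	long base_state;              /* ...other protocol fields... */
	struct ipv6_pinfo_like inet6; /* assumed to be the last member */
};

static struct ipv6_pinfo_like *pinfo_from_size(void *sk, size_t obj_size)
{
	/* Fragile: only correct if there is no padding after 'inet6'. */
	return (struct ipv6_pinfo_like *)((char *)sk + obj_size -
					  sizeof(struct ipv6_pinfo_like));
}

int main(void)
{
	struct tcp6_sock_like sk = { 0 };
	struct ipv6_pinfo_like *np = pinfo_from_size(&sk, sizeof(sk));

	/* Prints 1 only while the no-tail-padding assumption holds. */
	printf("%d\n", np == &sk.inet6);
	return 0;
}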
commit c1b8d8c11e88e4fd926cc390e89a20326f8a45d2
Author: Andrea Righi
Date:   Tue Aug 29 14:05:08 2023 +0200

    module/decompress: use vmalloc() for zstd decompression workspace

    commit a419beac4a070aff63c520f36ebf7cb8a76a8ae5 upstream.

    Using kmalloc() to allocate the decompression workspace for zstd may
    trigger the following warning when large modules are loaded (i.e., xfs):

      [ 2.961884] WARNING: CPU: 1 PID: 254 at mm/page_alloc.c:4453 __alloc_pages+0x2c3/0x350
      ...
      [ 2.989033] Call Trace:
      [ 2.989841]
      [ 2.990614]  ? show_regs+0x6d/0x80
      [ 2.991573]  ? __warn+0x89/0x160
      [ 2.992485]  ? __alloc_pages+0x2c3/0x350
      [ 2.993520]  ? report_bug+0x17e/0x1b0
      [ 2.994506]  ? handle_bug+0x51/0xa0
      [ 2.995474]  ? exc_invalid_op+0x18/0x80
      [ 2.996469]  ? asm_exc_invalid_op+0x1b/0x20
      [ 2.997530]  ? module_zstd_decompress+0xdc/0x2a0
      [ 2.998665]  ? __alloc_pages+0x2c3/0x350
      [ 2.999695]  ? module_zstd_decompress+0xdc/0x2a0
      [ 3.000821]  __kmalloc_large_node+0x7a/0x150
      [ 3.001920]  __kmalloc+0xdb/0x170
      [ 3.002824]  module_zstd_decompress+0xdc/0x2a0
      [ 3.003857]  module_decompress+0x37/0xc0
      [ 3.004688]  init_module_from_file+0xd0/0x100
      [ 3.005668]  idempotent_init_module+0x11c/0x2b0
      [ 3.006632]  __x64_sys_finit_module+0x64/0xd0
      [ 3.007568]  do_syscall_64+0x59/0x90
      [ 3.008373]  ? ksys_read+0x73/0x100
      [ 3.009395]  ? exit_to_user_mode_prepare+0x30/0xb0
      [ 3.010531]  ? syscall_exit_to_user_mode+0x37/0x60
      [ 3.011662]  ? do_syscall_64+0x68/0x90
      [ 3.012511]  ? do_syscall_64+0x68/0x90
      [ 3.013364]  entry_SYSCALL_64_after_hwframe+0x6e/0xd8

    However, contiguous physical memory does not seem to be required in
    module_zstd_decompress(), so use vmalloc() instead, to prevent the warning
    and avoid potential failures when loading compressed modules.

    Fixes: 169a58ad824d ("module/decompress: Support zstd in-kernel decompression")
    Signed-off-by: Andrea Righi
    Signed-off-by: Luis Chamberlain
    Signed-off-by: Greg Kroah-Hartman
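[Editor's note] The allocation change described in the commit above is small; the fragment below is a sketch of the pattern only, not the actual module_zstd_decompress() code. alloc_workspace_example(), free_workspace_example(), and workspace_size are hypothetical; vmalloc() and vfree() are the real kernel APIs, and the point is simply that a large workspace that does not need physically contiguous pages avoids the page allocator's order limit.

/*
 * Sketch only: allocate a large decompression workspace with vmalloc()
 * rather than kmalloc(), since the workspace does not need physically
 * contiguous memory and large kmalloc() requests can trip the
 * __alloc_pages() warning shown above.
 */
#include <linux/types.h>
#include <linux/vmalloc.h>
#include <linux/errno.h>

static int alloc_workspace_example(size_t workspace_size, void **wksp)
{
	void *p;

	p = vmalloc(workspace_size);	/* virtually contiguous is enough */
	if (!p)
		return -ENOMEM;

	*wksp = p;
	return 0;
}

static void free_workspace_example(void *wksp)
{
	vfree(wksp);			/* pairs with vmalloc() */
}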
commit 7e339e062f1549f460094cd9cdcad23b381e0ac6
Author: James Morse
Date:   Tue Aug 1 14:54:09 2023 +0000

    ARM: module: Use module_init_layout_section() to spot init sections

    commit a6846234f45801441f0e31a8b37f901ef0abd2df upstream.

    Today module_frob_arch_sections() spots init sections from their 'init'
    prefix, and uses this to keep the init PLTs separate from the rest.
    get_module_plt() uses within_module_init() to determine whether a location
    is in the init text or not, but this depends on whether core code thought
    this was an init section.

    Naturally the logic is different. module_init_layout_section() groups the
    init and exit text together if module unloading is disabled, as the exit
    code will never run. The result is that kernels with this configuration
    can't load all their modules because there are not enough PLTs for the
    combined init+exit section.

    A previous patch exposed module_init_layout_section(); use that so the
    logic is the same.

    Fixes: 055f23b74b20 ("module: check for exit sections in layout_sections() instead of module_init_section()")
    Cc: stable@vger.kernel.org
    Signed-off-by: James Morse
    Signed-off-by: Luis Chamberlain
    Signed-off-by: Greg Kroah-Hartman

commit a67496bf9009159171185462db933097b7407726
Author: James Morse
Date:   Tue Aug 1 14:54:08 2023 +0000

    arm64: module: Use module_init_layout_section() to spot init sections

    commit f928f8b1a2496e7af95b860f9acf553f20f68f16 upstream.

    Today module_frob_arch_sections() spots init sections from their 'init'
    prefix, and uses this to keep the init PLTs separate from the rest.
    module_emit_plt_entry() uses within_module_init() to determine whether a
    location is in the init text or not, but this depends on whether core
    code thought this was an init section.

    Naturally the logic is different. module_init_layout_section() groups the
    init and exit text together if module unloading is disabled, as the exit
    code will never run. The result is that kernels with this configuration
    can't load all their modules because there are not enough PLTs for the
    combined init+exit section.

    This results in the following:

      | WARNING: CPU: 2 PID: 51 at arch/arm64/kernel/module-plts.c:99 module_emit_plt_entry+0x184/0x1cc
      | Modules linked in: crct10dif_common
      | CPU: 2 PID: 51 Comm: modprobe Not tainted 6.5.0-rc4-yocto-standard-dirty #15208
      | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
      | pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      | pc : module_emit_plt_entry+0x184/0x1cc
      | lr : module_emit_plt_entry+0x94/0x1cc
      | sp : ffffffc0803bba60
      [...]
      | Call trace:
      |  module_emit_plt_entry+0x184/0x1cc
      |  apply_relocate_add+0x2bc/0x8e4
      |  load_module+0xe34/0x1bd4
      |  init_module_from_file+0x84/0xc0
      |  __arm64_sys_finit_module+0x1b8/0x27c
      |  invoke_syscall.constprop.0+0x5c/0x104
      |  do_el0_svc+0x58/0x160
      |  el0_svc+0x38/0x110
      |  el0t_64_sync_handler+0xc0/0xc4
      |  el0t_64_sync+0x190/0x194

    A previous patch exposed module_init_layout_section(); use that so the
    logic is the same.

    Reported-by: Adam Johnston
    Tested-by: Adam Johnston
    Fixes: 055f23b74b20 ("module: check for exit sections in layout_sections() instead of module_init_section()")
    Cc: # 5.15.x: 60a0aab7463ee69 arm64: module-plts: inline linux/moduleloader.h
    Cc: # 5.15.x
    Signed-off-by: James Morse
    Acked-by: Catalin Marinas
    Signed-off-by: Luis Chamberlain
    Signed-off-by: Greg Kroah-Hartman

commit 3f3591979634bd292ac99d1301ff7a0b66bf866e
Author: James Morse
Date:   Tue Aug 1 14:54:07 2023 +0000

    module: Expose module_init_layout_section()

    commit 2abcc4b5a64a65a2d2287ba0be5c2871c1552416 upstream.

    module_init_layout_section() chooses whether the core module loader
    considers a section as init or not. This affects the placement of the
    exit section when module unloading is disabled: that code will never run,
    so it can be free()d once the module has been initialised.

    arm and arm64 need to count, based on the section name, the number of
    PLTs they need before applying relocations. The init PLTs are stored
    separately so they can be free()d. arm and arm64 both use
    within_module_init() to decide which list of PLTs to use when applying
    the relocation.

    Because within_module_init()'s behaviour changes when module unloading is
    disabled, both architectures would need to take this into account when
    counting the PLTs. Today neither architecture does this, meaning that
    when module unloading is disabled there are insufficient PLTs in the init
    section to load some modules, resulting in warnings:

      | WARNING: CPU: 2 PID: 51 at arch/arm64/kernel/module-plts.c:99 module_emit_plt_entry+0x184/0x1cc
      | Modules linked in: crct10dif_common
      | CPU: 2 PID: 51 Comm: modprobe Not tainted 6.5.0-rc4-yocto-standard-dirty #15208
      | Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
      | pstate: 20400005 (nzCv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
      | pc : module_emit_plt_entry+0x184/0x1cc
      | lr : module_emit_plt_entry+0x94/0x1cc
      | sp : ffffffc0803bba60
      [...]
      | Call trace:
      |  module_emit_plt_entry+0x184/0x1cc
      |  apply_relocate_add+0x2bc/0x8e4
      |  load_module+0xe34/0x1bd4
      |  init_module_from_file+0x84/0xc0
      |  __arm64_sys_finit_module+0x1b8/0x27c
      |  invoke_syscall.constprop.0+0x5c/0x104
      |  do_el0_svc+0x58/0x160
      |  el0_svc+0x38/0x110
      |  el0t_64_sync_handler+0xc0/0xc4
      |  el0t_64_sync+0x190/0x194

    Instead of duplicating module_init_layout_section()'s logic, expose it.

    Reported-by: Adam Johnston
    Fixes: 055f23b74b20 ("module: check for exit sections in layout_sections() instead of module_init_section()")
    Cc: stable@vger.kernel.org
    Signed-off-by: James Morse
    Signed-off-by: Luis Chamberlain
    Signed-off-by: Greg Kroah-Hartman
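[Editor's note] The three module commits above share one idea: the decision about what counts as an init section must be the same in the architecture PLT-counting code and in the core loader. The fragment below is a simplified sketch, not the arm/arm64 implementation; struct plt_counts and account_plt() are hypothetical, while module_init_layout_section() is the core helper exposed by the last commit.

/*
 * Simplified sketch: when counting PLTs for a relocation section, decide
 * "init or core?" with the same helper the core loader uses, instead of a
 * private check on the ".init" name prefix. With module unloading
 * disabled, module_init_layout_section() also treats exit text as init,
 * so the counts match where within_module_init() will later say the code
 * lives.
 */
#include <linux/moduleloader.h>

struct plt_counts {
	unsigned int core_plts;
	unsigned int init_plts;
};

static void account_plt(struct plt_counts *counts, const char *secname,
			unsigned int nents)
{
	/*
	 * A bare name-prefix check (e.g. "does secname start with .init?")
	 * disagrees with the core loader when module unloading is disabled
	 * and exit text is grouped with init text.
	 */
	if (module_init_layout_section(secname))
		counts->init_plts += nents;
	else
		counts->core_plts += nents;
}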
commit 4e726924955825f44b6d2281ed6e1bc82a20360c
Author: Mario Limonciello
Date:   Wed Jul 12 12:24:59 2023 -0500

    ACPI: thermal: Drop nocrt parameter

    commit 5f641174a12b8a876a4101201a21ef4675ecc014 upstream.

    The `nocrt` module parameter has no code associated with it and does
    nothing. As `crt=-1` provides the functionality that `nocrt` was supposed
    to offer, drop `nocrt` and its associated documentation.

    This should fix a quirk for the Gigabyte GA-7ZX that used `nocrt` and
    thus didn't function properly.

    Fixes: 8c99fdce3078 ("ACPI: thermal: set "thermal.nocrt" via DMI on Gigabyte GA-7ZX")
    Signed-off-by: Mario Limonciello
    Cc: All applicable
    Signed-off-by: Rafael J. Wysocki
    Signed-off-by: Greg Kroah-Hartman