cve/2024/CVE-2024-56677.md
2025-09-29 16:08:36 +00:00


CVE-2024-56677

Description

In the Linux kernel, the following vulnerability has been resolved:

powerpc/fadump: Move fadump_cma_init to setup_arch() after initmem_init()

During early init CMA_MIN_ALIGNMENT_BYTES can be PAGE_SIZE, since pageblock_order is still zero and it gets initialized later during initmem_init() e.g.

setup_arch() -> initmem_init() -> sparse_init() -> set_pageblock_order()

One such use case where this causes issue is -

early_setup() -> early_init_devtree() -> fadump_reserve_mem() -> fadump_cma_init()

This causes CMA memory alignment check to be bypassed in cma_init_reserved_mem(). Then later cma_activate_area() can hit a VM_BUG_ON_PAGE(pfn & ((1 << order) - 1)) if the reserved memory area was not pageblock_order aligned.

Fix it by moving the fadump_cma_init() after initmem_init(), where other such cma reservations also gets called.

==============
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x10010
flags: 0x13ffff800000000(node=1|zone=0|lastcpupid=0x7ffff) CMA
raw: 013ffff800000000 5deadbeef0000100 5deadbeef0000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000000ffffffff 0000000000000000
page dumped because: VM_BUG_ON_PAGE(pfn & ((1 << order) - 1))
------------[ cut here ]------------
kernel BUG at mm/page_alloc.c:778!
Call Trace:
__free_one_page+0x57c/0x7b0 (unreliable)
free_pcppages_bulk+0x1a8/0x2c8
free_unref_page_commit+0x3d4/0x4e4
free_unref_page+0x458/0x6d0
init_cma_reserved_pageblock+0x114/0x198
cma_init_reserved_areas+0x270/0x3e0
do_one_initcall+0x80/0x2f8
kernel_init_freeable+0x33c/0x530
kernel_init+0x34/0x26c
ret_from_kernel_user_thread+0x14/0x1c

POC

Reference

No PoCs from references.

Github