"value":"In the Linux kernel, the following vulnerability has been resolved:\n\npowerpc/kasan: Fix addr error caused by page alignment\n\nIn kasan_init_region, when k_start is not page aligned, at the begin of\nfor loop, k_cur = k_start & PAGE_MASK is less than k_start, and then\n`va = block + k_cur - k_start` is less than block, the addr va is invalid,\nbecause the memory address space from va to block is not alloced by\nmemblock_alloc, which will not be reserved by memblock_reserve later, it\nwill be used by other places.\n\nAs a result, memory overwriting occurs.\n\nfor example:\nint __init __weak kasan_init_region(void *start, size_t size)\n{\n[...]\n\t/* if say block(dcd97000) k_start(feef7400) k_end(feeff3fe) */\n\tblock = memblock_alloc(k_end - k_start, PAGE_SIZE);\n\t[...]\n\tfor (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {\n\t\t/* at the begin of for loop\n\t\t * block(dcd97000) va(dcd96c00) k_cur(feef7000) k_start(feef7400)\n\t\t * va(dcd96c00) is less than block(dcd97000), va is invalid\n\t\t */\n\t\tvoid *va = block + k_cur - k_start;\n\t\t[...]\n\t}\n[...]\n}\n\nTherefore, page alignment is performed on k_start before\nmemblock_alloc() to ensure the validity of the VA address."