### [CVE-2024-57945](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-57945)

![](https://img.shields.io/static/v1?label=Product&message=Linux&color=blue) ![](https://img.shields.io/static/v1?label=Version&message=8310080799b40fd9f2a8b808c657269678c149af%3C%2092f08673d3f1893191323572f60e3c62f2e57c2f%20&color=brightgreen) ![](https://img.shields.io/static/v1?label=Vulnerability&message=n%2Fa&color=brightgreen)

### Description

In the Linux kernel, the following vulnerability has been resolved:

riscv: mm: Fix the out-of-bound issue of vmemmap address

In the sparse vmemmap model, the virtual address of vmemmap is calculated as ((struct page *)VMEMMAP_START - (phys_ram_base >> PAGE_SHIFT)), and a struct page's virtual address can then be calculated with an offset: (vmemmap + (pfn)). However, when initializing struct pages, the kernel actually starts from the first page of the section that phys_ram_base belongs to. If that first page's pfn is not (phys_ram_base >> PAGE_SHIFT), we get a virtual address below VMEMMAP_START when calculating the address of its struct page.

For example, if phys_ram_base starts at 0x82000000 with pfn 0x82000, the first page in the same section is actually pfn 0x80000. During init_unavailable_range(), the kernel will initialize the struct page for pfn 0x80000 at virtual address ((struct page *)VMEMMAP_START - 0x2000), which is below VMEMMAP_START as well as PCI_IO_END.

This commit fixes the bug by introducing a new variable, vmemmap_start_pfn, which is aligned to the memory section size, and using it instead of phys_ram_base to calculate the vmemmap address.

### POC

#### Reference
No PoCs from references.

#### Github
- https://github.com/oogasawa/Utility-security