cve/2024/CVE-2024-27005.md
2024-05-25 21:48:12 +02:00


### [CVE-2024-27005](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-27005)
![](https://img.shields.io/static/v1?label=Product&message=Linux&color=blue)
![](https://img.shields.io/static/v1?label=Version&message=af42269c3523%3C%20d0d04efa2e36&color=brightgreen)
![](https://img.shields.io/static/v1?label=Vulnerability&message=n%2Fa&color=brightgreen)
### Description
In the Linux kernel, the following vulnerability has been resolved:

interconnect: Don't access req_list while it's being manipulated

The icc_lock mutex was split into separate icc_lock and icc_bw_lock mutexes in [1] to avoid lockdep splats. However, this didn't adequately protect access to icc_node::req_list.

The icc_set_bw() function will eventually iterate over req_list while only holding icc_bw_lock, but req_list can be modified while only holding icc_lock. This causes races between icc_set_bw(), of_icc_get(), and icc_put().

Example A:

```
  CPU0                               CPU1
  ----                               ----
  icc_set_bw(path_a)
    mutex_lock(&icc_bw_lock);
                                     icc_put(path_b)
                                       mutex_lock(&icc_lock);
    aggregate_requests()
      hlist_for_each_entry(r, ...
                                       hlist_del(...
        <r = invalid pointer>
```

Example B:

```
  CPU0                               CPU1
  ----                               ----
  icc_set_bw(path_a)
    mutex_lock(&icc_bw_lock);
                                     path_b = of_icc_get()
                                       of_icc_get_by_index()
                                         mutex_lock(&icc_lock);
                                         path_find()
                                           path_init()
    aggregate_requests()
      hlist_for_each_entry(r, ...
                                         hlist_add_head(...
        <r = invalid pointer>
```

Fix this by ensuring icc_bw_lock is always held before manipulating icc_node::req_list. The additional places icc_bw_lock is held don't perform any memory allocations, so we should still be safe from the original lockdep splats that motivated the separate locks.

[1] commit af42269c3523 ("interconnect: Fix locking for runpm vs reclaim")
### POC
#### Reference
No PoCs from references.
#### Github
- https://github.com/fkie-cad/nvd-json-data-feeds