{
"id": "CVE-2024-26962",
"sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
"published": "2024-05-01T06:15:12.527",
"lastModified": "2024-05-01T13:02:20.750",
"vulnStatus": "Awaiting Analysis",
"descriptions": [
{
"lang": "en",
"value": "In the Linux kernel, the following vulnerability has been resolved:\n\ndm-raid456, md/raid456: fix a deadlock for dm-raid456 while io concurrent with reshape\n\nFor raid456, if reshape is still in progress, then IO across reshape\nposition will wait for reshape to make progress. However, for dm-raid,\nin following cases reshape will never make progress hence IO will hang:\n\n1) the array is read-only;\n2) MD_RECOVERY_WAIT is set;\n3) MD_RECOVERY_FROZEN is set;\n\nAfter commit c467e97f079f (\"md/raid6: use valid sector values to determine\nif an I/O should wait on the reshape\") fix the problem that IO across\nreshape position doesn't wait for reshape, the dm-raid test\nshell/lvconvert-raid-reshape.sh start to hang:\n\n[root@fedora ~]# cat /proc/979/stack\n[<0>] wait_woken+0x7d/0x90\n[<0>] raid5_make_request+0x929/0x1d70 [raid456]\n[<0>] md_handle_request+0xc2/0x3b0 [md_mod]\n[<0>] raid_map+0x2c/0x50 [dm_raid]\n[<0>] __map_bio+0x251/0x380 [dm_mod]\n[<0>] dm_submit_bio+0x1f0/0x760 [dm_mod]\n[<0>] __submit_bio+0xc2/0x1c0\n[<0>] submit_bio_noacct_nocheck+0x17f/0x450\n[<0>] submit_bio_noacct+0x2bc/0x780\n[<0>] submit_bio+0x70/0xc0\n[<0>] mpage_readahead+0x169/0x1f0\n[<0>] blkdev_readahead+0x18/0x30\n[<0>] read_pages+0x7c/0x3b0\n[<0>] page_cache_ra_unbounded+0x1ab/0x280\n[<0>] force_page_cache_ra+0x9e/0x130\n[<0>] page_cache_sync_ra+0x3b/0x110\n[<0>] filemap_get_pages+0x143/0xa30\n[<0>] filemap_read+0xdc/0x4b0\n[<0>] blkdev_read_iter+0x75/0x200\n[<0>] vfs_read+0x272/0x460\n[<0>] ksys_read+0x7a/0x170\n[<0>] __x64_sys_read+0x1c/0x30\n[<0>] do_syscall_64+0xc6/0x230\n[<0>] entry_SYSCALL_64_after_hwframe+0x6c/0x74\n\nThis is because reshape can't make progress.\n\nFor md/raid, the problem doesn't exist because register new sync_thread\ndoesn't rely on the IO to be done any more:\n\n1) If array is read-only, it can switch to read-write by ioctl/sysfs;\n2) md/raid never set MD_RECOVERY_WAIT;\n3) If MD_RECOVERY_FROZEN is set, mddev_suspend() doesn't hold\n 'reconfig_mutex', hence it can be cleared and reshape can continue by\n sysfs api 'sync_action'.\n\nHowever, I'm not sure yet how to avoid the problem in dm-raid yet. This\npatch on the one hand make sure raid_message() can't change\nsync_thread() through raid_message() after presuspend(), on the other\nhand detect the above 3 cases before wait for IO do be done in\ndm_suspend(), and let dm-raid requeue those IO."
}
],
"metrics": {},
"references": [
{
"url": "https://git.kernel.org/stable/c/41425f96d7aa59bc865f60f5dda3d7697b555677",
"source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
},
{
"url": "https://git.kernel.org/stable/c/5943a34bf6bab5801e08a55f63e1b8d5bc90dae1",
"source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
},
{
"url": "https://git.kernel.org/stable/c/a8d249d770cb357d16a2097b548d2e4c1c137304",
"source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
}
]
}