{
"id": "CVE-2024-56559",
"sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
"published": "2024-12-27T15:15:14.760",
"lastModified": "2024-12-27T15:15:14.760",
"vulnStatus": "Awaiting Analysis",
"cveTags": [],
"descriptions": [
{
"lang": "en",
"value": "In the Linux kernel, the following vulnerability has been resolved:\n\nmm/vmalloc: combine all TLB flush operations of KASAN shadow virtual address into one operation\n\nWhen compiling kernel source 'make -j $(nproc)' with the up-and-running\nKASAN-enabled kernel on a 256-core machine, the following soft lockup is\nshown:\n\nwatchdog: BUG: soft lockup - CPU#28 stuck for 22s! [kworker/28:1:1760]\nCPU: 28 PID: 1760 Comm: kworker/28:1 Kdump: loaded Not tainted 6.10.0-rc5 #95\nWorkqueue: events drain_vmap_area_work\nRIP: 0010:smp_call_function_many_cond+0x1d8/0xbb0\nCode: 38 c8 7c 08 84 c9 0f 85 49 08 00 00 8b 45 08 a8 01 74 2e 48 89 f1 49 89 f7 48 c1 e9 03 41 83 e7 07 4c 01 e9 41 83 c7 03 f3 90 <0f> b6 01 41 38 c7 7c 08 84 c0 0f 85 d4 06 00 00 8b 45 08 a8 01 75\nRSP: 0018:ffffc9000cb3fb60 EFLAGS: 00000202\nRAX: 0000000000000011 RBX: ffff8883bc4469c0 RCX: ffffed10776e9949\nRDX: 0000000000000002 RSI: ffff8883bb74ca48 RDI: ffffffff8434dc50\nRBP: ffff8883bb74ca40 R08: ffff888103585dc0 R09: ffff8884533a1800\nR10: 0000000000000004 R11: ffffffffffffffff R12: ffffed1077888d39\nR13: dffffc0000000000 R14: ffffed1077888d38 R15: 0000000000000003\nFS: 0000000000000000(0000) GS:ffff8883bc400000(0000) knlGS:0000000000000000\nCS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033\nCR2: 00005577b5c8d158 CR3: 0000000004850000 CR4: 0000000000350ef0\nCall Trace:\n <IRQ>\n ? watchdog_timer_fn+0x2cd/0x390\n ? __pfx_watchdog_timer_fn+0x10/0x10\n ? __hrtimer_run_queues+0x300/0x6d0\n ? sched_clock_cpu+0x69/0x4e0\n ? __pfx___hrtimer_run_queues+0x10/0x10\n ? srso_return_thunk+0x5/0x5f\n ? ktime_get_update_offsets_now+0x7f/0x2a0\n ? srso_return_thunk+0x5/0x5f\n ? srso_return_thunk+0x5/0x5f\n ? hrtimer_interrupt+0x2ca/0x760\n ? __sysvec_apic_timer_interrupt+0x8c/0x2b0\n ? sysvec_apic_timer_interrupt+0x6a/0x90\n </IRQ>\n <TASK>\n ? asm_sysvec_apic_timer_interrupt+0x16/0x20\n ? smp_call_function_many_cond+0x1d8/0xbb0\n ? 
__pfx_do_kernel_range_flush+0x10/0x10\n on_each_cpu_cond_mask+0x20/0x40\n flush_tlb_kernel_range+0x19b/0x250\n ? srso_return_thunk+0x5/0x5f\n ? kasan_release_vmalloc+0xa7/0xc0\n purge_vmap_node+0x357/0x820\n ? __pfx_purge_vmap_node+0x10/0x10\n __purge_vmap_area_lazy+0x5b8/0xa10\n drain_vmap_area_work+0x21/0x30\n process_one_work+0x661/0x10b0\n worker_thread+0x844/0x10e0\n ? srso_return_thunk+0x5/0x5f\n ? __kthread_parkme+0x82/0x140\n ? __pfx_worker_thread+0x10/0x10\n kthread+0x2a5/0x370\n ? __pfx_kthread+0x10/0x10\n ret_from_fork+0x30/0x70\n ? __pfx_kthread+0x10/0x10\n ret_from_fork_asm+0x1a/0x30\n </TASK>\n\nDebugging Analysis:\n\n 1. The following ftrace log shows that the lockup CPU spends too much\n time iterating vmap_nodes and flushing TLB when purging vm_area\n structures. (Some info is trimmed).\n\n kworker: funcgraph_entry: | drain_vmap_area_work() {\n kworker: funcgraph_entry: | mutex_lock() {\n kworker: funcgraph_entry: 1.092 us | __cond_resched();\n kworker: funcgraph_exit: 3.306 us | }\n ... ...\n kworker: funcgraph_entry: | flush_tlb_kernel_range() {\n ... ...\n kworker: funcgraph_exit: # 7533.649 us | }\n ... ...\n kworker: funcgraph_entry: 2.344 us | mutex_unlock();\n kworker: funcgraph_exit: $ 23871554 us | }\n\n The drain_vmap_area_work() spends over 23 seconds.\n\n There are 2805 flush_tlb_kernel_range() calls in the ftrace log.\n * One is called in __purge_vmap_area_lazy().\n * Others are called by purge_vmap_node->kasan_release_vmalloc.\n purge_vmap_node() iteratively releases kasan vmalloc\n allocations and flushes TLB for each vmap_area.\n - [Rough calculation] Each flush_tlb_kernel_range() runs\n about 7.5ms.\n -- 2804 * 7.5ms = 21.03 seconds.\n -- That's why a soft lock is triggered.\n\n 2. Extending the soft lockup time can work around the issue (For example,\n # echo\n---truncated---"
}
],
"metrics": {},
"references": [
{
"url": "https://git.kernel.org/stable/c/9e9e085effe9b7e342138fde3cf8577d22509932",
"source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
},
{
"url": "https://git.kernel.org/stable/c/f9a18889aad9b4c19c6c4550c67ad4f9ed2a354f",
"source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
}
]
}