{
  "id": "CVE-2021-46987",
  "sourceIdentifier": "416baaa9-dc9f-4396-8d5f-8c081fb06d67",
  "published": "2024-02-28T09:15:37.583",
  "lastModified": "2024-02-28T14:06:45.783",
  "vulnStatus": "Awaiting Analysis",
  "descriptions": [
    {
      "lang": "en",
"value": "In the Linux kernel, the following vulnerability has been resolved:\n\nbtrfs: fix deadlock when cloning inline extents and using qgroups\n\nThere are a few exceptional cases where cloning an inline extent needs to\ncopy the inline extent data into a page of the destination inode.\n\nWhen this happens, we end up starting a transaction while having a dirty\npage for the destination inode and while having the range locked in the\ndestination's inode iotree too. Because when reserving metadata space\nfor a transaction we may need to flush existing delalloc in case there is\nnot enough free space, we have a mechanism in place to prevent a deadlock,\nwhich was introduced in commit 3d45f221ce627d (\"btrfs: fix deadlock when\ncloning inline extent and low on free metadata space\").\n\nHowever when using qgroups, a transaction also reserves metadata qgroup\nspace, which can also result in flushing delalloc in case there is not\nenough available space at the moment. When this happens we deadlock, since\nflushing delalloc requires locking the file range in the inode's iotree\nand the range was already locked at the very beginning of the clone\noperation, before attempting to start the transaction.\n\nWhen this issue happens, stack traces like the following are reported:\n\n [72747.556262] task:kworker/u81:9 state:D stack: 0 pid: 225 ppid: 2 flags:0x00004000\n [72747.556268] Workqueue: writeback wb_workfn (flush-btrfs-1142)\n [72747.556271] Call Trace:\n [72747.556273] __schedule+0x296/0x760\n [72747.556277] schedule+0x3c/0xa0\n [72747.556279] io_schedule+0x12/0x40\n [72747.556284] __lock_page+0x13c/0x280\n [72747.556287] ? generic_file_readonly_mmap+0x70/0x70\n [72747.556325] extent_write_cache_pages+0x22a/0x440 [btrfs]\n [72747.556331] ? __set_page_dirty_nobuffers+0xe7/0x160\n [72747.556358] ? set_extent_buffer_dirty+0x5e/0x80 [btrfs]\n [72747.556362] ? update_group_capacity+0x25/0x210\n [72747.556366] ? cpumask_next_and+0x1a/0x20\n [72747.556391] extent_writepages+0x44/0xa0 [btrfs]\n [72747.556394] do_writepages+0x41/0xd0\n [72747.556398] __writeback_single_inode+0x39/0x2a0\n [72747.556403] writeback_sb_inodes+0x1ea/0x440\n [72747.556407] __writeback_inodes_wb+0x5f/0xc0\n [72747.556410] wb_writeback+0x235/0x2b0\n [72747.556414] ? get_nr_inodes+0x35/0x50\n [72747.556417] wb_workfn+0x354/0x490\n [72747.556420] ? newidle_balance+0x2c5/0x3e0\n [72747.556424] process_one_work+0x1aa/0x340\n [72747.556426] worker_thread+0x30/0x390\n [72747.556429] ? create_worker+0x1a0/0x1a0\n [72747.556432] kthread+0x116/0x130\n [72747.556435] ? kthread_park+0x80/0x80\n [72747.556438] ret_from_fork+0x1f/0x30\n\n [72747.566958] Workqueue: btrfs-flush_delalloc btrfs_work_helper [btrfs]\n [72747.566961] Call Trace:\n [72747.566964] __schedule+0x296/0x760\n [72747.566968] ? finish_wait+0x80/0x80\n [72747.566970] schedule+0x3c/0xa0\n [72747.566995] wait_extent_bit.constprop.68+0x13b/0x1c0 [btrfs]\n [72747.566999] ? finish_wait+0x80/0x80\n [72747.567024] lock_extent_bits+0x37/0x90 [btrfs]\n [72747.567047] btrfs_invalidatepage+0x299/0x2c0 [btrfs]\n [72747.567051] ? find_get_pages_range_tag+0x2cd/0x380\n [72747.567076] __extent_writepage+0x203/0x320 [btrfs]\n [72747.567102] extent_write_cache_pages+0x2bb/0x440 [btrfs]\n [72747.567106] ? update_load_avg+0x7e/0x5f0\n [72747.567109] ? enqueue_entity+0xf4/0x6f0\n [72747.567134] extent_writepages+0x44/0xa0 [btrfs]\n [72747.567137] ? 
enqueue_task_fair+0x93/0x6f0\n [72747.567140] do_writepages+0x41/0xd0\n [72747.567144] __filemap_fdatawrite_range+0xc7/0x100\n [72747.567167] btrfs_run_delalloc_work+0x17/0x40 [btrfs]\n [72747.567195] btrfs_work_helper+0xc2/0x300 [btrfs]\n [72747.567200] process_one_work+0x1aa/0x340\n [72747.567202] worker_thread+0x30/0x390\n [72747.567205] ? create_worker+0x1a0/0x1a0\n [72747.567208] kthread+0x116/0x130\n [72747.567211] ? kthread_park+0x80/0x80\n [72747.567214] ret_from_fork+0x1f/0x30\n\n [72747.569686] task:fsstress state:D stack: \n---truncated-
    }
  ],
  "metrics": {},
  "references": [
    {
      "url": "https://git.kernel.org/stable/c/96157707c0420e3d3edfe046f1cc797fee117ade",
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
    },
    {
      "url": "https://git.kernel.org/stable/c/d5347827d0b4b2250cbce6eccaa1c81dc78d8651",
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
    },
    {
      "url": "https://git.kernel.org/stable/c/f9baa501b4fd6962257853d46ddffbc21f27e344",
      "source": "416baaa9-dc9f-4396-8d5f-8c081fb06d67"
    }
  ]
}
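
The description above boils down to a lock-dependency cycle: the clone task locks the destination file range, then starts a transaction whose qgroup reservation may flush delalloc on a worker and wait for it, while the flush worker needs that same range lock. The following is a minimal user-space sketch of that pattern in C with pthreads, not kernel code; range_lock, flush_delalloc, and start_transaction_reserve_qgroup are hypothetical stand-ins for the extent io tree range lock and the btrfs paths seen in the stack traces. The program deadlocks by design when run.

/* Compile with: cc -pthread deadlock_sketch.c */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t range_lock = PTHREAD_MUTEX_INITIALIZER;

/* Models the btrfs-flush_delalloc worker: writing back the dirty page
 * requires locking the file range in the inode's io tree (the
 * lock_extent_bits frame in the second trace). */
static void *flush_delalloc(void *arg)
{
    (void)arg;
    printf("worker: flushing delalloc, need the range lock...\n");
    pthread_mutex_lock(&range_lock);   /* blocks: the clone task holds it */
    printf("worker: never reached\n");
    pthread_mutex_unlock(&range_lock);
    return NULL;
}

/* Models starting a transaction while qgroup metadata space is short:
 * the reservation path kicks off a delalloc flush and waits for it. */
static void start_transaction_reserve_qgroup(void)
{
    pthread_t worker;
    pthread_create(&worker, NULL, flush_delalloc, NULL);
    pthread_join(worker, NULL);        /* waits on a worker that waits on us */
}

int main(void)
{
    /* Cloning an inline extent: lock the destination range first
     * and dirty a page of the destination inode... */
    pthread_mutex_lock(&range_lock);
    printf("clone: destination range locked, page dirtied\n");

    /* ...then start a transaction. With qgroups enabled the reservation
     * may flush delalloc, which needs the range we already hold. */
    start_transaction_reserve_qgroup();

    pthread_mutex_unlock(&range_lock); /* never reached: deadlock above */
    return 0;
}

The fix referenced in the commits linked under "references" breaks this cycle on the kernel side; the sketch only illustrates why holding the range lock across a transaction start that can flush delalloc is unsafe.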