If a controller reset occurs while namespaces are being allocated, both
nvme_reset_work and nvme_scan_work hang, as shown below.
Test Scripts:
for ((t=1;t<=128;t++))
do
nsid=`nvme create-ns /dev/nvme1 -s 14537724 -c 14537724 -f 0 -m 0 \
-d 0 | awk -F: '{print($NF);}'`
nvme attach-ns /dev/nvme1 -n $nsid -c 0
done
nvme reset /dev/nvme1
Both nvme_reset_work and nvme_scan_work end up hung:
INFO: task kworker/u249:4:17848 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
message.
task:kworker/u249:4 state:D stack: 0 pid:17848 ppid: 2
flags:0x00000028
Workqueue: nvme-reset-wq nvme_reset_work [nvme]
Call trace:
__switch_to+0xb4/0xfc
__schedule+0x22c/0x670
schedule+0x4c/0xd0
blk_mq_freeze_queue_wait+0x84/0xc0
nvme_wait_freeze+0x40/0x64 [nvme_core]
nvme_reset_work+0x1c0/0x5cc [nvme]
process_one_work+0x1d8/0x4b0
worker_thread+0x230/0x440
kthread+0x114/0x120
INFO: task kworker/u249:3:22404 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
message.
task:kworker/u249:3 state:D stack: 0 pid:22404 ppid: 2
flags:0x00000028
Workqueue: nvme-wq nvme_scan_work [nvme_core]
Call trace:
__switch_to+0xb4/0xfc
__schedule+0x22c/0x670
schedule+0x4c/0xd0
rwsem_down_write_slowpath+0x32c/0x98c
down_write+0x70/0x80
nvme_alloc_ns+0x1ac/0x38c [nvme_core]
nvme_validate_or_alloc_ns+0xbc/0x150 [nvme_core]
nvme_scan_ns_list+0xe8/0x2e4 [nvme_core]
nvme_scan_work+0x60/0x500 [nvme_core]
process_one_work+0x1d8/0x4b0
worker_thread+0x260/0x440
kthread+0x114/0x120
INFO: task nvme:28428 blocked for more than 120 seconds.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
message.
task:nvme state:D stack: 0 pid:28428 ppid: 27119
flags:0x00000000
Call trace:
__switch_to+0xb4/0xfc
__schedule+0x22c/0x670
schedule+0x4c/0xd0
schedule_timeout+0x160/0x194
do_wait_for_common+0xac/0x1d0
__wait_for_common+0x78/0x100
wait_for_completion+0x24/0x30
__flush_work.isra.0+0x74/0x90
flush_work+0x14/0x20
nvme_reset_ctrl_sync+0x50/0x74 [nvme_core]
nvme_dev_ioctl+0x1b0/0x250 [nvme_core]
__arm64_sys_ioctl+0xa8/0xf0
el0_svc_common+0x88/0x234
do_el0_svc+0x7c/0x90
el0_svc+0x1c/0x30
el0_sync_handler+0xa8/0xb0
el0_sync+0x148/0x180
The hang happens because nvme_reset_work runs while nvme_scan_work is
still in progress. nvme_scan_work may add a new ns to the
ctrl->namespaces list after nvme_reset_work has already frozen all ns->q
on that list. The newly added ns is not frozen, so nvme_wait_freeze waits
forever (a simplified sketch of nvme_wait_freeze follows the diagram
below). Meanwhile, ctrl->namespaces_rwsem is held for read by
nvme_wait_freeze in nvme_reset_work, so nvme_scan_work also waits
forever. The two works are now deadlocked:
PROCESS1                             PROCESS2
==============                       ==============
nvme_scan_work
...                                  nvme_reset_work
nvme_validate_or_alloc_ns              nvme_dev_disable
  nvme_alloc_ns                          nvme_start_freeze
    down_write                         ...
    nvme_ns_add_to_ctrl_list           ...
    up_write                           nvme_wait_freeze
...                                      down_read
nvme_alloc_ns                            blk_mq_freeze_queue_wait
  down_write
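For context, nvme_wait_freeze holds namespaces_rwsem for read across the
whole wait, which is why the scan side's second down_write above can
never be taken. A simplified sketch of the pre-fix helper (details may
differ between kernel versions):

    void nvme_wait_freeze(struct nvme_ctrl *ctrl)
    {
        struct nvme_ns *ns;

        /* The read lock keeps the namespace list stable for the wait... */
        down_read(&ctrl->namespaces_rwsem);
        /*
         * ...but blk_mq_freeze_queue_wait() only returns once a queue is
         * fully frozen, so a single unfrozen, freshly scanned namespace
         * pins the rwsem forever and the scan's down_write() never
         * succeeds.
         */
        list_for_each_entry(ns, &ctrl->namespaces, list)
            blk_mq_freeze_queue_wait(ns->queue);
        up_read(&ctrl->namespaces_rwsem);
    }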
Fix this by marking the ctrl with a new NVME_CTRL_FROZEN flag, set in
nvme_start_freeze and cleared in nvme_unfreeze. The scan can then check
the flag before adding a new namespace (while holding namespaces_rwsem).
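A minimal sketch of that approach, assuming ctrl->flags is the
controller's existing atomic flag bitmap and NVME_CTRL_FROZEN is a new
bit in it; the exact bit value, call sites and error-unwind labels are
illustrative and depend on the tree the patch is applied to:

    void nvme_start_freeze(struct nvme_ctrl *ctrl)
    {
        struct nvme_ns *ns;

        /*
         * Set the flag before walking the list: a racing scan either
         * sees the flag and bails out, or has already put its namespace
         * on the list and gets frozen right here.
         */
        set_bit(NVME_CTRL_FROZEN, &ctrl->flags);
        down_read(&ctrl->namespaces_rwsem);
        list_for_each_entry(ns, &ctrl->namespaces, list)
            blk_freeze_queue_start(ns->queue);
        up_read(&ctrl->namespaces_rwsem);
    }

    void nvme_unfreeze(struct nvme_ctrl *ctrl)
    {
        struct nvme_ns *ns;

        down_read(&ctrl->namespaces_rwsem);
        list_for_each_entry(ns, &ctrl->namespaces, list)
            blk_mq_unfreeze_queue(ns->queue);
        up_read(&ctrl->namespaces_rwsem);
        /* Scanning may add namespaces again once we are unfrozen. */
        clear_bit(NVME_CTRL_FROZEN, &ctrl->flags);
    }

    /* In nvme_alloc_ns(), while holding namespaces_rwsem for write: */
        down_write(&ctrl->namespaces_rwsem);
        if (test_bit(NVME_CTRL_FROZEN, &ctrl->flags)) {
            /*
             * Reset already froze the existing queues; do not add an
             * unfrozen namespace behind its back, drop it instead.
             */
            up_write(&ctrl->namespaces_rwsem);
            goto out_unlink_ns;    /* existing error-unwind path */
        }
        nvme_ns_add_to_ctrl_list(ns);
        up_write(&ctrl->namespaces_rwsem);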
Signed-off-by: Bitao Hu <yaoma@linux.alibaba.com>
Reviewed-by: Guixin Liu <kanie@linux.alibaba.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>