Re: bdrv_drained_begin deadlock with io-threads

From: Vladimir Sementsov-Ogievskiy
Subject: Re: bdrv_drained_begin deadlock with io-threads
Date: Tue, 31 Mar 2020 17:53:58 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.2.1

31.03.2020 17:32, Dietmar Maurer wrote:
After a few iterations the VM freezes inside bdrv_drained_begin():

Thread 1 (Thread 0x7fffe9291080 (LWP 30949)):
#0  0x00007ffff5cb3916 in __GI_ppoll (fds=0x7fff63d30c40, nfds=2, 
timeout=<optimized out>, timeout@entry=0x0, sigmask=sigmask@entry=0x0) at 
#1  0x0000555555c60419 in ppoll (__ss=0x0, __timeout=0x0, __nfds=<optimized out>, 
__fds=<optimized out>) at /usr/include/x86_64-linux-gnu/bits/poll2.h:77
#2  0x0000555555c60419 in qemu_poll_ns (fds=<optimized out>, nfds=<optimized 
out>, timeout=timeout@entry=-1) at util/qemu-timer.c:335
#3  0x0000555555c62c01 in fdmon_poll_wait (ctx=0x7fffe8905e80, 
ready_list=0x7fffffffd3a8, timeout=-1) at util/fdmon-poll.c:79
#4  0x0000555555c621e7 in aio_poll (ctx=0x7fffe8905e80, 
blocking=blocking@entry=true) at util/aio-posix.c:589
#5  0x0000555555bc2565 in bdrv_do_drained_begin (poll=<optimized out>, 
ignore_bds_parents=false, parent=0x0, recursive=false, bs=0x7fff67240b80) at 
#6  0x0000555555bc2565 in bdrv_do_drained_begin (bs=0x7fff67240b80, recursive=<optimized 
out>, parent=0x0, ignore_bds_parents=<optimized out>, poll=<optimized out>) at 
#7  0x0000555555bde252 in bdrv_backup_top_drop (bs=0x7fff67240b80) at 
#8  0x0000555555bd995c in backup_clean (job=0x7fffe5609200) at 
#9  0x0000555555b6e08d in job_clean (job=0x7fffe5609200) at job.c:657
#10 0x0000555555b6e08d in job_finalize_single (job=0x7fffe5609200) at job.c:673
#11 0x0000555555b6e08d in job_finalize_single (job=0x7fffe5609200) at job.c:661
#12 0x0000555555b6ea3a in job_completed_txn_abort (job=<optimized out>) at 
#13 0x0000555555b6ec42 in job_completed (job=0x7fffe5609200) at job.c:843
#14 0x0000555555b6ec42 in job_completed (job=0x7fffe5609200) at job.c:836
#15 0x0000555555b6edf0 in job_exit (opaque=0x7fffe5609200) at job.c:864
#16 0x0000555555c5e975 in aio_bh_call (bh=0x7fffe721a2d0) at util/async.c:164
#17 0x0000555555c5e975 in aio_bh_poll (ctx=ctx@entry=0x7fffe8905e80) at 
#18 0x0000555555c6202e in aio_dispatch (ctx=0x7fffe8905e80) at 
#19 0x0000555555c5e85e in aio_ctx_dispatch (source=<optimized out>, callback=<optimized 
out>, user_data=<optimized out>) at util/async.c:298
#20 0x00007ffff7c7ef2e in g_main_context_dispatch () at 
#21 0x0000555555c61298 in glib_pollfds_poll () at util/main-loop.c:219
#22 0x0000555555c61298 in os_host_main_loop_wait (timeout=<optimized out>) at 
#23 0x0000555555c61298 in main_loop_wait (nonblocking=nonblocking@entry=0) at 
#24 0x00005555558fc5a9 in qemu_main_loop () at 
#25 0x0000555555800c3e in main (argc=<optimized out>, argv=<optimized out>, 
envp=<optimized out>) at /home/dietmar/pve5-devel/mirror_qemu/softmmu/main.c:49

The thing that we need to figure out is probably what operation we are
(or were) waiting for and why it doesn't complete.

A typical cause of a hang like this is that the operation we're waiting
for actually did already complete in a different thread, but forgot to
call aio_wait_kick().
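
For context, the wait/kick contract looks roughly like this (a sketch
based on QEMU's include/block/aio-wait.h and block/io.c, not verbatim
code):

    /* Waiter side, e.g. drain in the main thread: AIO_WAIT_WHILE() keeps
     * calling aio_poll() until the condition becomes false. */
    AIO_WAIT_WHILE(bdrv_get_aio_context(bs),
                   bdrv_drain_poll_top_level(bs, false, NULL));

    /* Completion side, possibly in another thread: after making the
     * condition false it must wake the waiter up.  This mirrors
     * bdrv_dec_in_flight() in block/io.c; dropping the aio_wait_kick()
     * gives exactly the silent hang described above. */
    void bdrv_dec_in_flight(BlockDriverState *bs)
    {
        atomic_dec(&bs->in_flight);
        aio_wait_kick();   /* wake anyone blocked in AIO_WAIT_WHILE() */
    }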

No, that is not the case.

To confirm, if you have gdb attached to a hanging process, you could
manually call bdrv_drain_poll_top_level(bs, 0, 0) and check its result.
If it returns false,

It returns true, because there are about 30 in_flight requests (block_backend).
But poll does not get any events ...

then a kick is missing somewhere. If it returns
true, some request is still waiting for completion somewhere and can be
inspected in gdb.
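
For reference, with gdb attached to the hung process the check can look
like this (frame number taken from the thread 1 backtrace above; any
frame where bs is in scope works):

    (gdb) frame 5                     # bdrv_do_drained_begin(), bs in scope
    (gdb) call bdrv_drain_poll_top_level(bs, 0, 0)
    (gdb) p bs->in_flight             # non-zero: requests still pending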

How can I see/debug those waiting requests?

Examine the bs->tracked_requests list.

BdrvTrackedRequest has a "Coroutine *co" field; it is a pointer to this
request's coroutine. You may use the qemu-gdb script to print the request's coroutine back-trace:

gdb> source qemu_source/scripts/qemu-gdb.py

gdb> qemu coroutine CO_POINTER

- this will show what exactly the request is doing or waiting for right now.
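
For example, assuming tracked_requests is the usual QLIST (field names
as in block_int.h; if the coroutine command does not accept an
expression, print $req->co first and pass the raw pointer instead):

    (gdb) set $req = bs->tracked_requests.lh_first
    (gdb) p *$req                        # offset, bytes, type, co, ...
    (gdb) qemu coroutine $req->co        # back-trace of this request's coroutine
    (gdb) set $req = $req->list.le_next  # advance to the next tracked request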

Thread 1 holds the BQL, while thread 7 wants to acquire it to complete the read
in prepare_mmio_access():

(gdb) source ./scripts/qemu-gdb.py
(gdb) qemu tcg-lock-status
Thread, BQL (iothread_mutex), Replay, Blocked?
10/31149, false, false, not blocked
8/30998, false, false, __lll_lock_wait waiting on 0x555556142e60 
<qemu_global_mutex> from 30949
7/30997, false, false, __lll_lock_wait waiting on 0x555556142e60 
<qemu_global_mutex> from 30949
6/30996, false, false, not blocked
4/30955, false, false, not blocked
3/30954, false, false, __lll_lock_wait waiting on 0x7fffe89151e0 from 30949
2/30953, false, false, not blocked
1/30949, true, false, not blocked

(gdb) thread apply 7 bt
Thread 7 (Thread 0x7fff669ff700 (LWP 30997)):
#0  0x00007ffff5d9729c in __lll_lock_wait () at 
#1  0x00007ffff5d90714 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x555556142e60 
<qemu_global_mutex>) at ../nptl/pthread_mutex_lock.c:80
#2  0x0000555555c65013 in qemu_mutex_lock_impl (mutex=0x555556142e60 <qemu_global_mutex>, 
file=0x555555c94008 "/home/dietmar/pve5-devel/mirror_qemu/exec.c", line=3089) at 
#3  0x00005555558480ee in qemu_mutex_lock_iothread_impl (file=file@entry=0x555555c94008 
"/home/dietmar/pve5-devel/mirror_qemu/exec.c", line=line@entry=3089)
     at /home/dietmar/pve5-devel/mirror_qemu/cpus.c:1831
#4  0x0000555555802425 in prepare_mmio_access (mr=<optimized out>, mr=<optimized 
out>) at /home/dietmar/pve5-devel/mirror_qemu/exec.c:3089
#5  0x000055555580759f in flatview_read_continue
     (fv=fv@entry=0x7fff5f9baf00, addr=addr@entry=375, attrs=..., 
ptr=ptr@entry=0x7ffff7fbe000, len=len@entry=1, addr1=<optimized out>, l=<optimized 
out>, mr=0x7fffe8e66e80)
     at /home/dietmar/pve5-devel/mirror_qemu/exec.c:3188
#6  0x0000555555807803 in flatview_read (fv=0x7fff5f9baf00, addr=375, 
attrs=..., buf=0x7ffff7fbe000, len=1) at 
#7  0x000055555580791b in address_space_read_full (as=<optimized out>, addr=<optimized 
out>, attrs=..., buf=<optimized out>, len=<optimized out>) at 
#8  0x0000555555807a25 in address_space_rw (as=<optimized out>, addr=addr@entry=375, 
attrs=..., attrs@entry=..., buf=<optimized out>, len=len@entry=1, 
     at /home/dietmar/pve5-devel/mirror_qemu/exec.c:3270
#9  0x0000555555865024 in kvm_handle_io (count=1, size=1, direction=<optimized out>, 
data=<optimized out>, attrs=..., port=375) at 
#10 0x0000555555865024 in kvm_cpu_exec (cpu=cpu@entry=0x7fffe89efb00) at 
#11 0x000055555584835e in qemu_kvm_cpu_thread_fn (arg=0x7fffe89efb00) at 
#12 0x000055555584835e in qemu_kvm_cpu_thread_fn (arg=arg@entry=0x7fffe89efb00) 
at /home/dietmar/pve5-devel/mirror_qemu/cpus.c:1218
#13 0x0000555555c64e1a in qemu_thread_start (args=<optimized out>) at 
#14 0x00007ffff5d8dfa3 in start_thread (arg=<optimized out>) at 
#15 0x00007ffff5cbe4cf in clone () at 

So the pattern is:

1.) Main thread has the BQL
2.) Main thread calls bdrv_drained_begin(bs);
3.) CPU thread wants to complete a read in prepare_mmio_access(), but needs the BQL
4.) Deadlock

Any ideas how to prevent that?
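
To make the suspected ordering concrete, here is a self-contained toy
(plain pthreads, not QEMU code) that hangs in the same shape, assuming
the drain poll really did depend on the blocked vCPU thread:

    #include <pthread.h>
    #include <stdatomic.h>
    #include <unistd.h>

    static pthread_mutex_t bql = PTHREAD_MUTEX_INITIALIZER;
    static atomic_bool request_done = false;  /* stands in for in_flight == 0 */

    /* Plays the role of the vCPU thread: it can only finish the "request"
     * once the BQL is free, like prepare_mmio_access() in thread 7 above. */
    static void *vcpu_thread(void *arg)
    {
        (void)arg;                            /* unused */
        pthread_mutex_lock(&bql);             /* blocks: main holds the BQL */
        atomic_store(&request_done, true);
        pthread_mutex_unlock(&bql);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_mutex_lock(&bql);             /* 1.) main thread has the BQL */
        pthread_create(&t, NULL, vcpu_thread, NULL);
        while (!atomic_load(&request_done)) { /* 2.) the drained_begin poll  */
            usleep(1000);                     /* 3.)+4.) never terminates    */
        }
        pthread_mutex_unlock(&bql);
        pthread_join(t, NULL);
        return 0;
    }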

What's going on in the vcpu thread (thread 7) is probably not part of
the problem. It does have to wait for the main thread, but it would only
really be part of the deadlock if the main thread were also waiting for
the vcpu thread, which I don't think it is.

Yes, I guess you are right. This read is triggered by the guest (not by 

Best regards,
