
Re: bdrv_drained_begin deadlock with io-threads


From: Dietmar Maurer
Subject: Re: bdrv_drained_begin deadlock with io-threads
Date: Wed, 1 Apr 2020 17:50:30 +0200 (CEST)

> On April 1, 2020 5:37 PM Dietmar Maurer <address@hidden> wrote:
> 
>  
> > > Is really nobody else able to reproduce this (has somebody already
> > > tried to reproduce it)?
> > 
> > I can get hangs, but that's for job_completed(), not for starting the
> > job. Also, my hangs have a non-empty bs->tracked_requests, so it looks
> > like a different case to me.
> 
> Can you please post the command line args of your VM? I use something like:
> 
> ./x86_64-softmmu/qemu-system-x86_64 \
>     -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' \
>     -mon 'chardev=qmp,mode=control' \
>     -pidfile /var/run/qemu-server/101.pid \
>     -m 1024 \
>     -object 'iothread,id=iothread-virtioscsi0' \
>     -device 'virtio-scsi-pci,id=virtioscsi0,iothread=iothread-virtioscsi0' \
>     -drive 'file=/backup/disk3/debian-buster.raw,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' \
>     -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0' \
>     -machine "type=pc,accel=kvm"
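For reference, a rough sketch of how a backup job could then be started over
the QMP socket configured above. This is only an illustration under
assumptions: the exact QMP command used in my reproduction is not shown in
this message, "drive-backup" with a made-up target path stands in for it, and
socat is just one way to talk to the socket:

# Hypothetical helper: connect to the QMP socket from the -chardev above and
# start a backup job on the iothread-backed drive; starting such a block job
# is the kind of operation that exercises bdrv_drained_begin.
# NOTE: "drive-backup" and the target path are stand-ins, not the exact
# command/path from the original reproduction.
socat - UNIX-CONNECT:/var/run/qemu-server/101.qmp <<'EOF'
{ "execute": "qmp_capabilities" }
{ "execute": "drive-backup",
  "arguments": { "device": "drive-scsi0",
                 "target": "/backup/disk3/scsi0-backup.raw",
                 "sync": "full",
                 "format": "raw" } }
EOF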

BTW, I get a segfault if I start the above VM without "accel=kvm":

gdb --args ./x86_64-softmmu/qemu-system-x86_64 \
    -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' \
    -mon 'chardev=qmp,mode=control' \
    -pidfile /var/run/qemu-server/101.pid \
    -m 1024 \
    -object 'iothread,id=iothread-virtioscsi0' \
    -device 'virtio-scsi-pci,id=virtioscsi0,iothread=iothread-virtioscsi0' \
    -drive 'file=/backup/disk3/debian-buster.raw,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' \
    -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0' \
    -machine "type=pc"

after a few seconds:

Thread 3 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe857e700 (LWP 22257)]
0x000055555587c130 in do_tb_phys_invalidate (tb=tb@entry=0x7fffa7b40500 
<code_gen_buffer+3409107>, 
    rm_from_page_list=rm_from_page_list@entry=true)
    at /home/dietmar/pve5-devel/mirror_qemu/accel/tcg/translate-all.c:1483
1483        atomic_set(&tcg_ctx->tb_phys_invalidate_count,
(gdb) bt
#0  0x000055555587c130 in do_tb_phys_invalidate
    (tb=tb@entry=0x7fffa7b40500 <code_gen_buffer+3409107>, 
rm_from_page_list=rm_from_page_list@entry=true)
    at /home/dietmar/pve5-devel/mirror_qemu/accel/tcg/translate-all.c:1483
#1  0x000055555587c53b in tb_phys_invalidate__locked (tb=0x7fffa7b40500 
<code_gen_buffer+3409107>)
    at /home/dietmar/pve5-devel/mirror_qemu/accel/tcg/translate-all.c:1960
#2  0x000055555587c53b in tb_invalidate_phys_page_range__locked
    (pages=pages@entry=0x7fffe780d400, p=0x7fff651066a0, 
start=start@entry=1072709632, end=end@entry=1072713728, 
retaddr=retaddr@entry=0) at 
/home/dietmar/pve5-devel/mirror_qemu/accel/tcg/translate-all.c:1960
#3  0x000055555587dad1 in tb_invalidate_phys_range (start=1072709632, 
end=1072771072)
    at /home/dietmar/pve5-devel/mirror_qemu/accel/tcg/translate-all.c:2036
#4  0x0000555555801c12 in invalidate_and_set_dirty
    (mr=<optimized out>, addr=<optimized out>, length=<optimized out>)
    at /home/dietmar/pve5-devel/mirror_qemu/exec.c:3036
#5  0x00005555558072df in address_space_unmap
    (as=<optimized out>, buffer=<optimized out>, len=<optimized out>, 
is_write=<optimized out>, access_len=65536)
    at /home/dietmar/pve5-devel/mirror_qemu/exec.c:3571
#6  0x0000555555967ff6 in dma_memory_unmap
    (access_len=<optimized out>, dir=<optimized out>, len=<optimized out>, 
buffer=<optimized out>, as=<optimized out>) at 
/home/dietmar/pve5-devel/mirror_qemu/include/sysemu/dma.h:145
#7  0x0000555555967ff6 in dma_blk_unmap (dbs=dbs@entry=0x7fffe7839220) at 
dma-helpers.c:104
#8  0x0000555555968394 in dma_complete (ret=0, dbs=0x7fffe7839220) at 
dma-helpers.c:116
#9  0x0000555555968394 in dma_blk_cb (opaque=0x7fffe7839220, ret=0) at 
dma-helpers.c:136
#10 0x0000555555bac78e in blk_aio_complete (acb=0x7fffe783da00) at 
block/block-backend.c:1339
#11 0x0000555555c7280b in coroutine_trampoline (i0=<optimized out>, 
i1=<optimized out>)
    at util/coroutine-ucontext.c:115
#12 0x00007ffff6176b50 in __correctly_grouped_prefixwc
    (begin=0x7fffa7b40240 <code_gen_buffer+3408403> L"\x3ff0497b", end=0x12 
<error: Cannot access memory at address 0x12>, thousands=0 L'\000', 
grouping=0x7fffa7b40590 <code_gen_buffer+3409251> "\001") at grouping.c:171
#13 0x0000000000000000 in  ()


It runs fine without iothreads.
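
Concretely, "without iothreads" here means the same TCG command line with the
-object iothread and the iothread= property dropped, roughly:

./x86_64-softmmu/qemu-system-x86_64 \
    -chardev 'socket,id=qmp,path=/var/run/qemu-server/101.qmp,server,nowait' \
    -mon 'chardev=qmp,mode=control' \
    -pidfile /var/run/qemu-server/101.pid \
    -m 1024 \
    -device 'virtio-scsi-pci,id=virtioscsi0' \
    -drive 'file=/backup/disk3/debian-buster.raw,if=none,id=drive-scsi0,format=raw,cache=none,aio=native,detect-zeroes=on' \
    -device 'scsi-hd,bus=virtioscsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0' \
    -machine "type=pc"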

But I guess this is a totally different problem?



