Re: [Qemu-devel] [PATCH 14/19] block: Defer .bdrv_drain_begin callback to polling phase


From: Max Reitz
Subject: Re: [Qemu-devel] [PATCH 14/19] block: Defer .bdrv_drain_begin callback to polling phase
Date: Wed, 27 Jun 2018 16:30:17 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0

On 2018-04-11 18:39, Kevin Wolf wrote:
> We cannot allow aio_poll() in bdrv_drain_invoke(begin=true) until we're
> done with propagating the drain through the graph and are doing the
> single final BDRV_POLL_WHILE().
> 
> Just schedule the coroutine with the callback and increase bs->in_flight
> to make sure that the polling phase will wait for it.
> 
> Signed-off-by: Kevin Wolf <address@hidden>
> ---
>  block/io.c | 28 +++++++++++++++++++++++-----
>  1 file changed, 23 insertions(+), 5 deletions(-)
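
For reference, the scheme the commit message describes is roughly the
following. This is only a sketch modelled on block/io.c around this series,
not the literal patch hunk; the BdrvCoDrainData layout and its fields are
assumptions for illustration, while bdrv_inc_in_flight(), bdrv_dec_in_flight(),
qemu_coroutine_create() and aio_co_schedule() are the usual QEMU helpers:

typedef struct BdrvCoDrainData {
    Coroutine *co;
    BlockDriverState *bs;
    bool begin;
} BdrvCoDrainData;

/* Runs in coroutine context once the main loop gets around to it. */
static void coroutine_fn bdrv_drain_invoke_entry(void *opaque)
{
    BdrvCoDrainData *data = opaque;
    BlockDriverState *bs = data->bs;

    if (data->begin) {
        bs->drv->bdrv_co_drain_begin(bs);
    } else {
        bs->drv->bdrv_co_drain_end(bs);
    }

    /* Drop the reference taken in bdrv_drain_invoke(); this is what
     * lets the final BDRV_POLL_WHILE() make progress. */
    bdrv_dec_in_flight(bs);
    g_free(data);
}

static void bdrv_drain_invoke(BlockDriverState *bs, bool begin)
{
    BdrvCoDrainData *data;

    if (!bs->drv || (begin && !bs->drv->bdrv_co_drain_begin) ||
            (!begin && !bs->drv->bdrv_co_drain_end)) {
        return;
    }

    data = g_new(BdrvCoDrainData, 1);
    *data = (BdrvCoDrainData) { .bs = bs, .begin = begin };

    /* No aio_poll() here: take an in_flight reference and schedule the
     * driver callback, so that the single BDRV_POLL_WHILE() at the end
     * of the drain waits for it instead. */
    bdrv_inc_in_flight(bs);
    data->co = qemu_coroutine_create(bdrv_drain_invoke_entry, data);
    aio_co_schedule(bdrv_get_aio_context(bs), data->co);
}

The point is that begin-time callbacks no longer poll in place while the
drain is still being propagated through the graph; they merely have to
complete before the drain's own polling phase can finish.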

According to bisect, this breaks blockdev-snapshot with QED:

$ ./qemu-img create -f qed foo.qed 64M
Formatting 'foo.qed', fmt=qed size=67108864 cluster_size=65536
$ echo "{'execute':'qmp_capabilities'}
        {'execute':'blockdev-snapshot',
         'arguments':{'node':'backing','overlay':'overlay'}}
        {'execute':'quit'}" | \
    x86_64-softmmu/qemu-system-x86_64 -qmp stdio -nodefaults \
        -blockdev "{'node-name':'backing','driver':'null-co'}" \
        -blockdev "{'node-name':'overlay','driver':'qed',
                    'file':{'driver':'file','filename':'foo.qed'}}"
{"QMP": {"version": {"qemu": {"micro": 50, "minor": 12, "major": 2},
"package": "v2.12.0-1422-g0109e7e6f8"}, "capabilities": []}}
{"return": {}}
qemu-system-x86_64: block.c:3434: bdrv_replace_node: Assertion
`!atomic_read(&to->in_flight)' failed.
[1]    5252 done                 echo  |
       5253 abort (core dumped)  x86_64-softmmu/qemu-system-x86_64 -qmp
stdio -nodefaults -blockdev  -blockdev

Max


