Re: [PATCH] block/stream: Drain subtree around graph change


From: Kevin Wolf
Subject: Re: [PATCH] block/stream: Drain subtree around graph change
Date: Tue, 5 Apr 2022 12:14:04 +0200

On 24.03.2022 at 13:57, Hanna Reitz wrote:
> When the stream block job cuts out the nodes between top and base in
> stream_prepare(), it does not drain the subtree manually; it fetches the
> base node, and tries to insert it as the top node's backing node with
> bdrv_set_backing_hd().  bdrv_set_backing_hd() however will drain, and so
> the actual base node might change (because the base node is actually not
> part of the stream job) before the old base node passed to
> bdrv_set_backing_hd() is installed.
> 
> This has two implications:
> 
> First, the stream job does not keep a strong reference to the base node.
> Therefore, if it is deleted in bdrv_set_backing_hd()'s drain (e.g.
> because some other block job is drained to finish), we will get a
> use-after-free.  We should keep a strong reference to that node.
> 
> Second, even with such a strong reference, the problem remains that the
> base node might change before bdrv_set_backing_hd() actually runs and as
> a result the wrong base node is installed.
> 
> Both effects can be seen in 030's TestParallelOps.test_overlapping_5()
> case, which has five nodes, and simultaneously streams from the middle
> node to the top node, and commits the middle node down to the base node.
> As it is, this will sometimes crash, namely when we encounter the
> above-described use-after-free.
> 
> Taking a strong reference to the base node, we no longer get a crash,
> but the resulting block graph is less than ideal: The expected result is
> obviously that all middle nodes are cut out and the base node is the
> immediate backing child of the top node.  However, if stream_prepare()
> takes a strong reference to its base node (the middle node), and then
> the commit job finishes in bdrv_set_backing_hd(), supposedly dropping
> that middle node, the stream job will just reinstall it again.
> 
> Therefore, we need to keep the whole subtree drained in
> stream_prepare()

That doesn't sound right. I think in reality it's "if we take the really
big hammer and drain the whole subtree, then the bit that we really need
usually happens to be covered, too".

When you have a long backing chain and merge the two topmost overlays
with streaming, then it's none of the stream job's business whether
there is I/O going on for the base image way down the chain. Subtree
drains do much more than they should in this case.

At the same time, they probably do too little, because what you describe
protecting against is not I/O, but graph modifications
done by callbacks invoked in the AIO_WAIT_WHILE() when replacing the
backing file. The callback could be invoked by I/O on an entirely
different subgraph (maybe if the other thing is a mirror job) or it
could be a BH or anything else really. bdrv_drain_all() would increase
your chances, but I'm not sure if even that would be guaranteed to be
enough - because it's really another instance of abusing drain for
locking, we're not really interested in the _I/O_ of the node.

> so that the graph modification it performs is effectively atomic,
> i.e. that the base node it fetches is still the base node when
> bdrv_set_backing_hd() sets it as the top node's backing node.

I think the way to keep graph modifications atomic is to avoid polling in
the middle. Not even running any callbacks is a lot safer than trying to
make sure there can't be undesired callbacks that want to run.

So probably adding drain (or anything else that polls) in
bdrv_set_backing_hd() was a bad idea. It could assert that the parent
node is drained, but it should be the caller's responsibility to do so.
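
For illustration, such a check inside bdrv_set_backing_hd() could be as
small as the fragment below; treating BlockDriverState's quiesce_counter
as the "currently drained" indicator is an assumption for this sketch,
not code from this thread:

    /* Sketch: require that the caller has already drained the parent,
     * instead of draining (and therefore polling) in here ourselves.
     * quiesce_counter is the per-node drain counter; reading
     * quiesce_counter > 0 as "a drained section is active" is assumed. */
    assert(bs->quiesce_counter > 0);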

What streaming completion should look like is probably something like
this:

    1. Drain above_base; this also drains all parents up to the top node
       (needed because in-flight I/O using an edge that is removed isn't
       going to end well)

    2. Without any polling involved:
        a. Find base (it can't change without polling)
        b. Update top->backing to point to base

    3. End of drain.
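
A rough sketch of that flow against QEMU's block-layer helpers might look
like the following; the particular helpers (bdrv_skip_filters(),
bdrv_filter_or_cow_bs()) and the StreamBlockJob fields used here are
illustrative assumptions, not code from this patch, and it presumes that
bdrv_set_backing_hd() itself no longer polls, as suggested above:

    /* Illustrative sketch only; error handling omitted (&error_abort
     * just keeps the fragment short). */
    static void stream_prepare_sketch(StreamBlockJob *s)
    {
        BlockDriverState *top = bdrv_skip_filters(s->target_bs);
        BlockDriverState *base;

        /* 1. Drain above_base; this quiesces all parents up to top, so
         *    no I/O is in flight on the backing edge being replaced. */
        bdrv_drained_begin(s->above_base);

        /* 2. No polling from here on: the base looked up now is still
         *    the base when the new backing link is installed. */
        base = bdrv_filter_or_cow_bs(s->above_base);
        bdrv_set_backing_hd(top, base, &error_abort);

        /* 3. End of the drained section. */
        bdrv_drained_end(s->above_base);
    }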

You don't have to keep extra references or deal with surprise removals
of nodes because the whole thing is atomic when you don't poll. Other
threads can't interfere either because graph modification requires the
BQL.

There is no reason to keep base drained because its I/O doesn't
interfere with the incoming edge that we're changing.

I think all of this is really relevant for Emanuele's work, which
involves adding AIO_WAIT_WHILE() deep inside graph update functions. I
fully expect that we would see very similar problems there, and just
stacking drain sections on top of drain sections that usually happen to
fix things, but aren't guaranteed to, doesn't look like a good solution.

Kevin



