Re: [PATCH] block/stream: Drain subtree around graph change


From: Hanna Reitz
Subject: Re: [PATCH] block/stream: Drain subtree around graph change
Date: Tue, 5 Apr 2022 13:47:26 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.5.0

On 05.04.22 12:14, Kevin Wolf wrote:
On 24.03.2022 at 13:57, Hanna Reitz wrote:
When the stream block job cuts out the nodes between top and base in
stream_prepare(), it does not drain the subtree manually; it fetches the
base node, and tries to insert it as the top node's backing node with
bdrv_set_backing_hd().  bdrv_set_backing_hd() however will drain, and so
the actual base node might change (because the base node is actually not
part of the stream job) before the old base node passed to
bdrv_set_backing_hd() is installed.

This has two implications:

First, the stream job does not keep a strong reference to the base node.
Therefore, if it is deleted in bdrv_set_backing_hd()'s drain (e.g.
because some other block job is drained to finish), we will get a
use-after-free.  We should keep a strong reference to that node.

Second, even with such a strong reference, the problem remains that the
base node might change before bdrv_set_backing_hd() actually runs and as
a result the wrong base node is installed.

Both effects can be seen in 030's TestParallelOps.test_overlapping_5()
case, which has five nodes, and simultaneously streams from the middle
node to the top node, and commits the middle node down to the base node.
As it is, this will sometimes crash, namely when we encounter the
above-described use-after-free.

Taking a strong reference to the base node, we no longer get a crash,
but the resulting block graph is less than ideal: The expected result is
obviously that all middle nodes are cut out and the base node is the
immediate backing child of the top node.  However, if stream_prepare()
takes a strong reference to its base node (the middle node), and then
the commit job finishes in bdrv_set_backing_hd(), supposedly dropping
that middle node, the stream job will just reinstall it again.

Therefore, we need to keep the whole subtree drained in
stream_prepare()
That doesn't sound right. I think in reality it's "if we take the really
big hammer and drain the whole subtree, then the bit that we really need
usually happens to be covered, too".

When you have a long backing chain and merge the two topmost overlays
with streaming, then it's none of the stream job's business whether
there is I/O going on for the base image way down the chain. Subtree
drains do much more than they should in this case.

Yes, see the discussion I had with Vladimir.  He convinced me that this can’t be the solution indefinitely, but that we need locking for graph changes that’s separate from draining, because (1) those are different things, and (2) changing the graph should influence I/O as little as possible.

I found this the best solution to fix a known case of a use-after-free for 7.1, though.

At the same time they probably do too little, because what you describe
protecting against is not I/O, but graph modifications done by callbacks
invoked in the AIO_WAIT_WHILE() when replacing the backing file. The
callback could be invoked by I/O on an entirely different subgraph (maybe
if the other thing is a mirror job) or it could be a BH or anything else
really. bdrv_drain_all() would increase your chances, but I'm not sure if
even that would be guaranteed to be enough - because it's really another
instance of abusing drain for locking, we're not really interested in the
_I/O_ of the node.

The most common instances of graph modification I see are QMP and block jobs finishing.  The former will not be deterred by draining, and we do know of one instance where that is a problem (see the bdrv_next() discussion).  Generally, it isn’t though.  (If it is, this case here won’t be the only thing that breaks.)

As for the latter, most block jobs are parents of the nodes they touch (stream is one notable exception with how it handles its base, and changing that did indeed cause us headache before), and so will at least be paused when a drain occurs on a node they touch.  Since pausing doesn’t affect jobs that have exited their main loop, there might be some problem with concurrent jobs that are also finished but yielding, but I couldn’t find such a case.
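
(Roughly the mechanism I mean, sketched from memory of blockjob.c rather
than quoted verbatim: the job is attached to its nodes through the
child_job BdrvChildClass, whose drained_begin callback pauses it.)

    /* blockjob.c (sketch): jobs are attached to the nodes they touch via
     * the child_job BdrvChildClass; draining such a node pauses the job. */
    static void child_job_drained_begin(BdrvChild *c)
    {
        BlockJob *job = c->opaque;
        job_pause(&job->job);
    }

    static const BdrvChildClass child_job = {
        /* other callbacks (including .drained_end, which resumes) omitted */
        .drained_begin = child_job_drained_begin,
    };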

I’m not sure what you’re arguing for, so I can only assume.  Perhaps you’re arguing for reverting this patch, which I wouldn’t want to do, because at least it fixes the one known use-after-free case. Perhaps you’re arguing that we need something better, and then I completely agree.

so that the graph modification it performs is effectively atomic,
i.e. that the base node it fetches is still the base node when
bdrv_set_backing_hd() sets it as the top node's backing node.
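
Concretely, the idea is to wrap that section in a subtree drain, roughly
like the following sketch (simplified, not the exact hunk; which node
exactly gets drained here is illustrative):

    /* Sketch of the patch's approach: fetch the base and install it as
     * the new backing node within one drained section, so the graph
     * modification is (effectively) atomic. */
    bdrv_subtree_drained_begin(s->above_base);

    base = bdrv_filter_or_cow_bs(s->above_base);
    bdrv_ref(base);                              /* guards against the UAF */
    bdrv_set_backing_hd(unfiltered_bs, base, &local_err);
    bdrv_unref(base);

    bdrv_subtree_drained_end(s->above_base);
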
I think the way to keep graph modifications atomic is to avoid polling in
the middle. Not even running any callbacks is a lot safer than trying to
make sure there can't be undesired callbacks that want to run.

So probably adding drain (or anything else that polls) in
bdrv_set_backing_hd() was a bad idea. It could assert that the parent
node is drained, but it should be the caller's responsibility to do so.
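
Something like the following sketch, i.e. the function itself no longer
polls and only checks the caller's side of the contract (the
quiesce_counter check is just one way such an assertion might look):

    /* Sketch: the caller is responsible for draining @bs; the function
     * itself performs the graph change without polling. */
    int bdrv_set_backing_hd(BlockDriverState *bs, BlockDriverState *backing_hd,
                            Error **errp)
    {
        assert(bs->quiesce_counter > 0);   /* caller holds a drained section */

        /* ... rewire bs->backing, no drain / AIO_WAIT_WHILE() in here ... */
        return 0;
    }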

What streaming completion should look like is probably something like
this:

     1. Drain above_base; this also drains all parents up to the top node
        (needed because in-flight I/O using an edge that is removed isn't
        going to end well)

     2. Without any polling involved:
         a. Find base (it can't change without polling)
         b. Update top->backing to point to base

     3. End of drain.

You don't have to keep extra references or deal with surprise removals
of nodes because the whole thing is atomic when you don't poll. Other
threads can't interfere either because graph modification requires the
BQL.

There is no reason to keep base drained because its I/O doesn't
interfere with the incoming edge that we're changing.
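
As a sketch of that sequence (assuming a bdrv_set_backing_hd() that, as
said above, does not poll by itself; the names are illustrative):

    bdrv_drained_begin(s->above_base);    /* 1. also quiesces parents up to top */

    /* 2. no polling from here on */
    base = bdrv_filter_or_cow_bs(s->above_base);            /* 2a. cannot change */
    bdrv_set_backing_hd(unfiltered_bs, base, &local_err);   /* 2b. must not poll */

    bdrv_drained_end(s->above_base);      /* 3. end of drain */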

I think all of this is really relevant for Emanuele's work, which
involves adding AIO_WAIT_WHILE() deep inside graph update functions. I
fully expect that we would see very similar problems, and just stacking
drain sections over drain sections that might happen to usually fix
things, but aren't guaranteed to, doesn't look like a good solution.

I don’t disagree.  Well, I agree, actually.  But I don’t know what you’re proposing to actually do.  There is active discussion on how block graph changes should be handled on Emanuele’s series.

Hanna



