[PATCH 0/7] block: Attempt on fixing 030-reported errors

From: Hanna Reitz
Subject: [PATCH 0/7] block: Attempt on fixing 030-reported errors
Date: Thu, 4 Nov 2021 11:38:42 +0100


I’ve tried to investigate what causes iotest 030 to fail.  Here’s
what I found:

(1) stream_prepare() gets the base node by looking up the node below
    above_base.  It then invokes bdrv_cor_filter_drop() before we
    actually use the base node.
    bdrv_cor_filter_drop() modifies the block graph, which means
    draining, which means other parties might modify the graph, too.
    Therefore, afterwards, the node below above_base might be completely
    different, and the base node we got before might already be gone.

(2) bdrv_replace_child_noperm() can set BdrvChild.bs to NULL.  That’s
    problematic, because most of our code cannot deal with BdrvChild
    objects whose .bs pointer is NULL.  We assume that such objects are
    immediately removed from the BDS.children list, and that they won’t
    appear under bs->backing or bs->file (i.e. that those pointers are
    immediately NULLed when bs->{backing,file}->bs is NULLed).
    After setting BdrvChild.bs to NULL, bdrv_replace_child_noperm() may
    invoke bdrv_parent_drained_end_single() on the BdrvChild.
    Therefore, other code is run at that point, and it might not be
    ready to encounter something like
    `bs->backing != NULL && bs->backing->bs == NULL`.
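
To illustrate the invariant that most callers assume, here is a hedged
sketch (`backing_bs_of()` is a hypothetical helper for illustration, not
a function from the QEMU tree):

```c
/* Hypothetical helper, for illustration only.  Code throughout the
 * block layer effectively assumes this pattern is safe, i.e. that
 * bs->backing is either NULL or has a non-NULL .bs: */
static BlockDriverState *backing_bs_of(BlockDriverState *bs)
{
    return bs->backing ? bs->backing->bs : NULL;
}
```

In the transient state described above, this helper would return NULL
even though bs->backing is set, and any caller that dereferences
bs->backing->bs directly would crash.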

(3) 030 in one case launches four stream jobs concurrently, all with
    speed=1024.  It then unthrottles them one after another, but the
    problem is that if one job finishes, the jobs above it will be
    advanced by a step (which is actually 512k); so since we unthrottle
    bottom to top, it’s possible that all jobs below the top job are
    finished before we get to unthrottle the top job.  This will advance
    the top job so far (3 * 512k + 512k = 2M) that it actually finishes
    despite still being throttled.  Attempting to unthrottle it then
    throws an error.

Here’s how I think we can solve these problems:

(1) Invoke bdrv_cor_filter_drop() first, then get the base node
    afterwards, when the graph will no longer change.
    Implemented in patch 1.
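
    In pseudo-C terms, the idea looks roughly like this (a sketch only,
    not the actual patch; the field and function names follow
    block/stream.c at the time of this series, but treat the details as
    assumptions):

```c
static int stream_prepare(Job *job)
{
    StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);

    /* Graph-modifying step first: this drains, so other parties
     * may reshape the graph around us. */
    bdrv_cor_filter_drop(s->cor_filter_bs);
    s->cor_filter_bs = NULL;

    /* Only now resolve base, when the graph can no longer change
     * under our feet: */
    BlockDriverState *base = bdrv_filter_or_cow_bs(s->above_base);

    /* ... use base ... */
    return 0;
}
```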

(2A) bdrv_replace_child_noperm() should immediately set bs->file or
     bs->backing to NULL when it sets bs->{file,backing}->bs to NULL.
     It should also immediately remove any BdrvChild with .bs == NULL
     from the parent’s BDS.children list.
     Implemented in patches 2 through 6.
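
     A rough sketch of what 2A means inside bdrv_replace_child_noperm()
     (illustrative only; the real patches are more involved, and the
     variable names here are placeholders):

```c
/* When the new child BDS is NULL, do not leave a dangling BdrvChild
 * behind: clear the parent's bs->file / bs->backing pointer and take
 * the child off the parent's BDS.children list in the same step. */
if (new_bs == NULL) {
    if (parent_bs->backing == child) {
        parent_bs->backing = NULL;
    } else if (parent_bs->file == child) {
        parent_bs->file = NULL;
    }
    QLIST_REMOVE(child, next);   /* remove from BDS.children */
}
```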

(2B) Alternatively, we could always keep the whole subgraph drained
     while we manipulate it.  Then, the bdrv_parent_drained_end_single()
     in bdrv_replace_child_noperm() wouldn’t do anything.
     To fix 030, we would need to add a drained section to
     stream_prepare(): Namely we’d need to drain the subgraph below the
     COR filter node.
     This would be a much simpler solution, but I don’t feel like it’s
     the right one.

(3) Just unthrottle the jobs from top to bottom instead of bottom to
    top.
    Implemented in patch 7.

As you can see, I’m not sure which of 2A or 2B is the right solution.  I
decided to investigate both: 2A was much more complicated, but seemed
like the right thing to do; 2B is much simpler, but doesn’t feel as
right.  Therefore, I decided to go with 2A in this first version of
this series.

Hanna Reitz (7):
  stream: Traverse graph after modification
  block: Manipulate children list in .attach/.detach
  block: Unite remove_empty_child and child_free
  block: Drop detached child from ignore list
  block: Pass BdrvChild ** to replace_child_noperm
  block: Let replace_child_noperm free children
  iotests/030: Unthrottle parallel jobs in reverse

 block.c                | 178 +++++++++++++++++++++++++++++------------
 block/stream.c         |   7 +-
 tests/qemu-iotests/030 |  11 ++-
 3 files changed, 144 insertions(+), 52 deletions(-)

