qemu-block

From: Hanna Reitz
Subject: Re: [PATCH v2] block-backend: prevent dangling BDS pointers across aio_poll()
Date: Mon, 10 Jan 2022 19:57:05 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Thunderbird/91.3.0

On 14.12.21 15:35, Stefan Hajnoczi wrote:
The BlockBackend root child can change when aio_poll() is invoked. This
happens when a temporary filter node is removed upon blockjob
completion, for example.

Functions in block/block-backend.c must be aware of this when using a
blk_bs() pointer across aio_poll() because the BlockDriverState refcnt
may reach 0, resulting in a stale pointer.
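
In sketch form (not a verbatim excerpt from the tree), the unsafe
pattern looks like this:

    BlockDriverState *bs = blk_bs(blk);  /* borrowed pointer, no ref held */

    bdrv_drained_begin(bs);  /* may invoke aio_poll(); a blockjob can
                              * complete here, remove its filter node,
                              * and drop the last reference to bs     */
    bdrv_drained_end(bs);    /* bs may now be dangling                */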

One example is scsi_device_purge_requests(), which calls blk_drain() to
wait for in-flight requests to cancel. If the backup blockjob is active,
then the BlockBackend root child is a temporary filter BDS owned by the
blockjob. The blockjob can complete during bdrv_drained_begin() and the
last reference to the BDS is released when the temporary filter node is
removed. This results in a use-after-free when blk_drain() calls
bdrv_drained_end(bs) on the dangling pointer.
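
The remedy, sketched from the hunks below (assuming bs may be NULL, as
in blk_drain()):

    BlockDriverState *bs = blk_bs(blk);

    if (bs) {
        bdrv_ref(bs);             /* pin bs across aio_poll() */
        bdrv_drained_begin(bs);
    }
    /* ... quiesce in-flight requests ... */
    if (bs) {
        bdrv_drained_end(bs);     /* safe: we still hold a reference */
        bdrv_unref(bs);           /* may free bs if this was the last ref */
    }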

By the way, I have a BZ for this, though it’s about block-stream instead of backup (https://bugzilla.redhat.com/show_bug.cgi?id=2036178).  But I’m happy to report your patch seems* to fix that problem, too!  (Thanks for fixing my BZs! :))

*I’ve written a reproducer in iotest form (https://gitlab.com/hreitz/qemu/-/blob/stefans-fix-and-a-test/tests/qemu-iotests/tests/stream-error-on-reset); so far I can only assume it indeed reproduces the report, but that iotest is in fact fixed by this patch.  (Which made me very happy.)

Hanna

Explicitly hold a reference to bs across block APIs that invoke
aio_poll().

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
v2:
- Audit block/block-backend.c and fix additional cases
---
  block/block-backend.c | 11 +++++++++++
  1 file changed, 11 insertions(+)

diff --git a/block/block-backend.c b/block/block-backend.c
index 12ef80ea17..a40ad7fa92 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -828,10 +828,12 @@ void blk_remove_bs(BlockBackend *blk)
      notifier_list_notify(&blk->remove_bs_notifiers, blk);
      if (tgm->throttle_state) {
          bs = blk_bs(blk);
+        bdrv_ref(bs);
          bdrv_drained_begin(bs);
          throttle_group_detach_aio_context(tgm);
          throttle_group_attach_aio_context(tgm, qemu_get_aio_context());
          bdrv_drained_end(bs);
+        bdrv_unref(bs);
      }

      blk_update_root_state(blk);
@@ -1705,6 +1707,7 @@ void blk_drain(BlockBackend *blk)
      BlockDriverState *bs = blk_bs(blk);

      if (bs) {
+        bdrv_ref(bs);
          bdrv_drained_begin(bs);
      }
@@ -1714,6 +1717,7 @@ void blk_drain(BlockBackend *blk)

      if (bs) {
          bdrv_drained_end(bs);
+        bdrv_unref(bs);
      }
  }
@@ -2044,10 +2048,13 @@ static int blk_do_set_aio_context(BlockBackend *blk, AioContext *new_context,
      int ret;

      if (bs) {
+        bdrv_ref(bs);
+
          if (update_root_node) {
              ret = bdrv_child_try_set_aio_context(bs, new_context, blk->root,
                                                   errp);
              if (ret < 0) {
+                bdrv_unref(bs);
                  return ret;
              }
          }
@@ -2057,6 +2064,8 @@ static int blk_do_set_aio_context(BlockBackend *blk, AioContext *new_context,
              throttle_group_attach_aio_context(tgm, new_context);
              bdrv_drained_end(bs);
          }
+
+        bdrv_unref(bs);
      }

      blk->ctx = new_context;
@@ -2326,11 +2335,13 @@ void blk_io_limits_disable(BlockBackend *blk)
      ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
      assert(tgm->throttle_state);
      if (bs) {
+        bdrv_ref(bs);
          bdrv_drained_begin(bs);
      }
      throttle_group_unregister_tgm(tgm);
      if (bs) {
          bdrv_drained_end(bs);
+        bdrv_unref(bs);
      }
  }



