Re: [RFC PATCH] test-bdrv-drain: keep graph manipulations out of coroutines


From: Paolo Bonzini
Subject: Re: [RFC PATCH] test-bdrv-drain: keep graph manipulations out of coroutines
Date: Fri, 2 Dec 2022 18:22:43 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.5.0

On 12/2/22 14:42, Emanuele Giuseppe Esposito wrote:


> On 02/12/2022 at 14:27, Paolo Bonzini wrote:
>> Changes to the BlockDriverState graph will have to take the
>> corresponding lock for writing, and therefore cannot be done
>> inside a coroutine.  Move them outside the test body.
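[Editorial sketch, not part of the patch: the pattern the commit message describes is that the coroutine test body sticks to I/O while graph setup and teardown happen outside it. The helper names test_body_entry and run_coroutine_test are made up for illustration; the init/fini helpers are the ones added in the diff below.]

static void coroutine_fn test_body_entry(void *opaque)
{
    bool *done = opaque;

    /* Only I/O and drain calls belong here; no graph changes such as
     * bdrv_set_backing_hd(). */
    *done = true;
}

static void run_coroutine_test(void)
{
    bool done = false;
    Coroutine *co;

    test_drv_cb_init();     /* graph setup: outside the coroutine */

    co = qemu_coroutine_create(test_body_entry, &done);
    qemu_coroutine_enter(co);
    while (!done) {
        aio_poll(qemu_get_aio_context(), true);
    }

    test_drv_cb_fini();     /* graph teardown: outside the coroutine */
}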

>> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>> ---
>>   tests/unit/test-bdrv-drain.c | 63 ++++++++++++++++++++++++++----------
>>   1 file changed, 46 insertions(+), 17 deletions(-)
>>
>> diff --git a/tests/unit/test-bdrv-drain.c b/tests/unit/test-bdrv-drain.c
>> index 6ae44116fe79..d85083dd4f9e 100644
>> --- a/tests/unit/test-bdrv-drain.c
>> +++ b/tests/unit/test-bdrv-drain.c
>> @@ -199,25 +199,40 @@ static void do_drain_end_unlocked(enum drain_type drain_type, BlockDriverState *
>>       }
>>   }
>> +static BlockBackend *blk;
>> +static BlockDriverState *bs, *backing;
>> +
>> +static void test_drv_cb_init(void)
>> +{
>> +    blk = blk_new(qemu_get_aio_context(), BLK_PERM_ALL, BLK_PERM_ALL);
>> +    bs = bdrv_new_open_driver(&bdrv_test, "test-node", BDRV_O_RDWR,
>> +                              &error_abort);
>> +    blk_insert_bs(blk, bs, &error_abort);
>> +
>> +    backing = bdrv_new_open_driver(&bdrv_test, "backing", 0, &error_abort);
>> +    bdrv_set_backing_hd(bs, backing, &error_abort);
>> +}
>> +
>> +static void test_drv_cb_fini(void)
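[The quoted hunk is trimmed at this point; the body of the fini helper is not shown. A hypothetical reconstruction, assuming it simply drops the references taken in test_drv_cb_init(), would be:]

static void test_drv_cb_fini(void)
{
    /* Hypothetical body, mirroring test_drv_cb_init() above. */
    bdrv_unref(backing);
    bdrv_unref(bs);
    blk_unref(blk);
}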

> fini stands for "finito"? :)

No, for finish :) http://ftp.math.utah.edu/u/ma/hohn/linux/misc/elf/node3.html

> Anyways, an alternative solution for this is also here (probably coming
> from you too):
> https://lists.nongnu.org/archive/html/qemu-devel/2022-03/msg03517.html

Much better. At least patches 7-8 from that series have to be salvaged, possibly 10 as well.

Paolo



