From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v4] blockjob: Fix crash with IOthread when block commit after snapshot
Date: Mon, 1 Feb 2021 13:27:43 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.0

Hi!

Thanks for the fix, and sorry for the delay!

Please send each new version of a patch as a separate email thread. That's the rule from
https://wiki.qemu.org/Contribute/SubmitAPatch; it is more readable, and it's less
likely that your patch will be missed.

28.01.2021 04:30, 08005325@163.com wrote:
From: Michael Qiu <qiudayu@huayun.com>

v4: rebase to latest code

v3: reformat the commit log, remove duplicate content

v2: update the coredump backtrace in the commit log to match the latest
     QEMU master branch

Such things shouldn't be in a commit message. You may put such comments after the
--- line[*] in a generated patch email.


Currently, if the guest has workloads, the IO thread will acquire the aio_context
lock before doing io_submit, which leads to a segmentation fault when doing block
commit after a snapshot, as shown below:

Do you have a reproducer script?


Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f7c7d91f700 (LWP 99907)]
0x00005576d0f65aab in bdrv_mirror_top_pwritev at ../block/mirror.c:1437
1437    ../block/mirror.c: No such file or directory.
(gdb) p s->job
$17 = (MirrorBlockJob *) 0x0
(gdb) p s->stop
$18 = false

(gdb) bt

Switch to qemu main thread:
/lib/../lib64/libpthread.so.0
/lib/../lib64/libpthread.so.0
../util/qemu-thread-posix.c:79
qapi/qapi-commands-block-core.c:346
../qapi/qmp-dispatch.c:110
/lib/../lib64/libglib-2.0.so.0

Not a very informative bt...


In the IO thread, when bdrv_mirror_top_pwritev() runs, the job is NULL and the stop
field is false. This means the MirrorBDSOpaque "s" object has not been initialized
yet; it is initialized by block_job_create(), but that initialization is stuck
acquiring the lock.

Could you show another thread bt?

Hm, so you argue that we come to bdrv_mirror_top_pwritev() (which means that the
mirror-top node is already inserted into the block graph), but its bs->opaque is
not initialized?

Hmm, indeed, in mirror_start_job() we do insert mirror_top_bs before the
block_job_create() call.

But we should do all of that in a drained section, so that no parallel I/O requests
can come in.

And we do have a drained section, but it ends immediately after bdrv_append(),
while bs_opaque is still not initialized. Probably we just need to expand it?
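
To make the race concrete, here is the relevant ordering in mirror_start_job(),
paraphrased (not a verbatim excerpt; error handling and intermediate steps omitted,
comments are mine):

    bdrv_drained_begin(bs);
    bdrv_append(mirror_top_bs, bs, &local_err); /* mirror-top is now in the graph */
    bdrv_drained_end(bs);                       /* guest I/O may flow again here... */

    /* ... permission and option handling ... */

    s = block_job_create(job_id, driver, NULL,  /* ...but bs_opaque (s->job,       */
                         mirror_top_bs, ...);   /* s->stop) is only set from here  */

So an I/O request that reaches bdrv_mirror_top_pwritev() between bdrv_drained_end()
and block_job_create() sees a still-zeroed MirrorBDSOpaque.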


Maybe:

diff --git a/block/mirror.c b/block/mirror.c
index 8e1ad6eceb..0a6bfc1230 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1610,11 +1610,11 @@ static BlockJob *mirror_start_job(
     bdrv_ref(mirror_top_bs);
     bdrv_drained_begin(bs);
     bdrv_append(mirror_top_bs, bs, &local_err);
-    bdrv_drained_end(bs);
     if (local_err) {
         bdrv_unref(mirror_top_bs);
         error_propagate(errp, local_err);
+        bdrv_drained_end(bs);
         return NULL;
     }
@@ -1789,6 +1789,8 @@ static BlockJob *mirror_start_job(
     trace_mirror_start(bs, s, opaque);
     job_start(&s->common.job);
 
+    bdrv_drained_end(bs);
+
     return &s->common;
 
 fail:
@@ -1813,6 +1815,8 @@ fail:
     bdrv_unref(mirror_top_bs);
 
+    bdrv_drained_end(bs);
+
     return NULL;
 }

Could you check whether it helps?



The root cause is that QEMU does a release/acquire while holding the lock;
meanwhile, the IO thread gets the lock after the release stage, and the crash
occurs.

Actually, in this situation, job->job.aio_context will not be equal to
qemu_get_aio_context(), and will be the same as bs->aio_context;
thus, there is no need to release the lock, because bdrv_root_attach_child()
will not change the context.

This patch fixes the issue.

Signed-off-by: Michael Qiu <qiudayu@huayun.com>
---

[*] Here you can put any comments that will not go into the final commit message,
like version history.

  blockjob.c | 6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/blockjob.c b/blockjob.c
index 98ac8af982..51a09f3b60 100644
--- a/blockjob.c
+++ b/blockjob.c
@@ -214,13 +214,15 @@ int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
     BdrvChild *c;
 
     bdrv_ref(bs);
-    if (job->job.aio_context != qemu_get_aio_context()) {
+    if (bdrv_get_aio_context(bs) != job->job.aio_context &&
+        job->job.aio_context != qemu_get_aio_context()) {
         aio_context_release(job->job.aio_context);
     }
     c = bdrv_root_attach_child(bs, name, &child_job, 0,
                                job->job.aio_context, perm, shared_perm, job,
                                errp);
-    if (job->job.aio_context != qemu_get_aio_context()) {
+    if (bdrv_get_aio_context(bs) != job->job.aio_context &&
+        job->job.aio_context != qemu_get_aio_context()) {
That's a wrong check: it will never reacquire the lock on the success path, because
after a successful attach, the bs context will definitely be equal to the job
context.

I think you need a boolean variable at the start of the function, initialized to
the condition; after _attach_child() you don't recheck the condition but rely on
the variable (see the sketch after the diff).

          aio_context_acquire(job->job.aio_context);
      }
      if (c == NULL) {
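
A minimal sketch of that suggestion, assuming the rest of block_job_add_bdrv()
stays as in the patch above; the variable name need_context_ops is mine, chosen
for illustration:

    int block_job_add_bdrv(BlockJob *job, const char *name, BlockDriverState *bs,
                           uint64_t perm, uint64_t shared_perm, Error **errp)
    {
        BdrvChild *c;
        /* Evaluate the condition once: a successful bdrv_root_attach_child()
         * moves bs into the job's context, so rechecking it afterwards would
         * skip the reacquire on the success path. */
        bool need_context_ops = bdrv_get_aio_context(bs) != job->job.aio_context &&
                                job->job.aio_context != qemu_get_aio_context();

        bdrv_ref(bs);
        if (need_context_ops) {
            aio_context_release(job->job.aio_context);
        }
        c = bdrv_root_attach_child(bs, name, &child_job, 0,
                                   job->job.aio_context, perm, shared_perm, job,
                                   errp);
        if (need_context_ops) {
            aio_context_acquire(job->job.aio_context);
        }
        if (c == NULL) {
            return -EPERM;
        }
        /* ... rest of the function unchanged ... */
    }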


The code was introduced by Kevin in 132ada80c4a6 "block: Adjust AioContexts when
attaching nodes", so I think we need his opinion. You may also add
"Fixes: 132ada80c4a6fea7b67e8bb0a5fd299994d927c6", especially if you check that
your case doesn't fail before this commit.

I think the idea itself is correct, as bdrv_root_attach_child() will not call any
of the *_set_aio_* functions, so there is no reason to release the lock. It
shouldn't hurt, and it's great if it fixes some crash.

When the side effect of a function is temporarily releasing some lock, it's hard
to be sure that all callers are OK with it...


--
Best regards,
Vladimir


