From: John Snow
Subject: Re: [PATCH v5 2/8] python/machine: Handle QMP errors on close more meticulously
Date: Wed, 27 Oct 2021 13:49:26 -0400

This reply is long, sorry.

On Wed, Oct 27, 2021 at 7:19 AM Kevin Wolf <kwolf@redhat.com> wrote:
Am 26.10.2021 um 19:56 hat John Snow geschrieben:
> To use the AQMP backend, Machine just needs to be a little more diligent
> about what happens when closing a QMP connection. The operation is no
> longer a freebie in the async world; it may return errors encountered in
> the async bottom half on incoming message receipt, etc.
>
> (AQMP's disconnect, ultimately, serves as the quiescence point where all
> async contexts are gathered together, and any final errors reported at
> that point.)
>
> Because async QMP continues to check for messages asynchronously, it's
> very likely that the loop will have exited due to EOF after issuing the
> last 'quit' command. That error will ultimately be bubbled
> up when attempting to close the QMP connection. The manager class here
> then is free to discard it -- if it was expected.
>
> Signed-off-by: John Snow <jsnow@redhat.com>
> Reviewed-by: Hanna Reitz <hreitz@redhat.com>
> ---
>  python/qemu/machine/machine.py | 48 +++++++++++++++++++++++++++++-----
>  1 file changed, 42 insertions(+), 6 deletions(-)
>
> diff --git a/python/qemu/machine/machine.py b/python/qemu/machine/machine.py
> index 0bd40bc2f76..a0cf69786b4 100644
> --- a/python/qemu/machine/machine.py
> +++ b/python/qemu/machine/machine.py
> @@ -342,9 +342,15 @@ def _post_shutdown(self) -> None:
>          # Comprehensive reset for the failed launch case:
>          self._early_cleanup()

> -        if self._qmp_connection:
> -            self._qmp.close()
> -            self._qmp_connection = None
> +        try:
> +            self._close_qmp_connection()
> +        except Exception as err:  # pylint: disable=broad-except
> +            LOG.warning(
> +                "Exception closing QMP connection: %s",
> +                str(err) if str(err) else type(err).__name__
> +            )
> +        finally:
> +            assert self._qmp_connection is None

>          self._close_qemu_log_file()

> @@ -420,6 +426,31 @@ def _launch(self) -> None:
>                                         close_fds=False)
>          self._post_launch()

> +    def _close_qmp_connection(self) -> None:
> +        """
> +        Close the underlying QMP connection, if any.
> +
> +        Dutifully report errors that occurred while closing, but assume
> +        that any error encountered indicates an abnormal termination
> +        process and not a failure to close.
> +        """
> +        if self._qmp_connection is None:
> +            return
> +
> +        try:
> +            self._qmp.close()
> +        except EOFError:
> +            # EOF can occur as an Exception here when using the Async
> +            # QMP backend. It indicates that the server closed the
> +            # stream. If we successfully issued 'quit' at any point,
> +            # then this was expected. If the remote went away without
> +            # our permission, it's worth reporting that as an abnormal
> +            # shutdown case.
> +            if not (self._user_killed or self._quit_issued):
> +                raise

Isn't this racy for those tests that expect QEMU to quit by itself and
then later call wait()? self._quit_issued is only set to True in wait(),
but whatever will cause QEMU to quit happens earlier and it might
actually quit before wait() is called.

_quit_issued is also set to True via qmp() and command(), but I think you're referring to cases where QEMU terminates for reasons other than an explicit command. So, yes, QEMU might indeed terminate/abort/quit before machine.py has recorded that fact somewhere. I wasn't aware of that being a problem. I suppose it'd be racy if, say, you orchestrated a "side-channel quit" and then called shutdown() instead of wait(); I think that's the case you're worried about? But, this code should ultimately be called in only four cases (see the sketch just after the list):

(1) Connection failure
(2) shutdown()
(3) shutdown(), via wait()
(4) shutdown(), via kill()
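For context, the way _quit_issued gets recorded on the command paths is roughly this; a paraphrased sketch of the series, not a verbatim quote:

    def qmp(self, cmd, **args):
        # (Paraphrased) issue the command over the QMP connection.
        ret = self._qmp.cmd(cmd, args=args)
        # If 'quit' went through successfully, remember that we asked QEMU
        # to go away, so the eventual EOF on the QMP socket is "expected".
        if cmd == 'quit' and 'error' not in ret:
            self._quit_issued = True
        return ret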

I had considered it a matter of calling the correct exit path from the test code. If shutdown() does what the label on the tin says (it shuts down the VM), I actually would expect it to be racy if QEMU was in the middle of deconstructing itself. That's the belief behind changing around iotest 300 the way I did.
 
It would make sense to me that such tests need to declare that they
expect QEMU to quit before actually performing the action. And then
wait() becomes less weird in patch 1, too, because it can just assert
self._quit_issued instead of unconditionally setting it.

That's a thought. I was working on the model that calling wait() implies that the writer expects the VM to terminate through some mechanism otherwise invisible to machine.py (HMP perhaps, a QGA action, a migration failure, etc.). It's one less call vs. saying "expect_shutdown(); wait_shutdown()". I wasn't aware of a circumstance where you'd want to separate those two semantic bits of info ("I expect QEMU will shut down" / "I am waiting for QEMU to shut down"), and since the latter implies the former, I rolled 'em up into one.
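Concretely, with this series wait() is just a thin wrapper that records that expectation and then runs the normal shutdown path (again paraphrasing rather than quoting):

    def wait(self, timeout=30):
        """Wait for the VM to power off and perform post-shutdown cleanup."""
        # Calling wait() *is* the declaration that we expect QEMU to
        # terminate on its own via some side channel.
        self._quit_issued = True
        self.shutdown(timeout=timeout)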

I suppose in your case, you're wondering what happens if we decide to call shutdown() instead of wait() after orchestrating a "side-channel quit"? I *would* have considered that a semantic error in the author's code, but I suppose it's reasonable to expect that "shutdown() should DTRT". Though, in order for it to DTRT, you have to inform machine.py that you're expecting that side-channel termination, and if you already forgot to use wait() instead of shutdown(), I'm not sure I can help the author remember that they should have issued an "expect_shutdown()" or similar.

The other point I'm unsure about is whether you can actually kill QEMU without
getting either self._user_killed or self._quit_issued set. The
potentially problematic case I see is shutdown(hard = False) where soft
shutdown fails. Then a hard shutdown will be performed without setting
self._user_killed (is this a bug?).

The original intent of _user_killed was specifically to record cases where the test-writer *intentionally* killed the VM process. We then used that flag to change the error reporting on cleanup in _post_shutdown. The idea was that if the QEMU process terminated due to a signal and we didn't intentionally cause it, we should loudly report that occurrence. We added this specifically to catch problems on the exit path in various iotests. So, the fact that soft shutdown fails over to a hard shutdown but doesn't set that intent flag is the intended behavior.
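The reporting that flag controls lives in _post_shutdown and looks roughly like this (paraphrased; the exact message format may differ):

    # In _post_shutdown(), after the process has been reaped:
    exitcode = self.exitcode()
    if (exitcode is not None and exitcode < 0
            and not (self._user_killed and exitcode == -signal.SIGKILL)):
        # QEMU died from a signal that we did not intentionally send;
        # complain loudly so the abnormal exit is visible in the test log.
        LOG.warning('qemu received signal %i; command: "%s"',
                    -int(exitcode), ' '.join(self._qemu_full_args))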

So, what about _close_qmp_connection()? I figured that if any of the following were true:

1) quit was explicitly issued via qmp() or command()
2) wait() was called instead of shutdown, implying an anticipated side-channel termination
3) kill() or shutdown(hard=True) was called

... then we were expecting the remote to hang up on us, and we didn't need to report that error.

What about when the VM is killed, but not intentionally by the test-writer? As far as machine.py knows, the termination was unintentional, so it will allow that error to trickle up when _close_qmp_connection() is called. The server hung up on us and we have no idea why, so the error is reported upwards. From the context of just this one function, that seems like the correct behavior.

Of course, sending the 'quit' command in _soft_shutdown() will set
self._quit_issued at least, but are we absolutely sure that it can never
raise an exception before getting to qmp()? I guess in theory, closing
the console socket in _early_cleanup() could raise one? (But either way,
not relying on such subtleties would make the code easier to
understand.)

If it does raise an exception before getting to qmp(), we'll fail over to the hard shutdown path.

shutdown()
  _do_shutdown()
    _soft_shutdown()
      _early_cleanup() -- Let's say this one chokes.

So we'll likely have _quit_issued = False and _user_killed = False.
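For reference, the ordering inside _soft_shutdown is roughly the following (a paraphrased sketch, not a verbatim quote of the patch), which is why an early failure never reaches the 'quit' command:

    def _soft_shutdown(self, timeout):
        self._early_cleanup()           # <-- the hypothetical failure point
        if self._qmp_connection:
            if not self._quit_issued:
                self.qmp('quit')        # records _quit_issued on success
        self._subp.wait(timeout=timeout)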

We fail over:

shutdown()
  _do_shutdown()
    _hard_shutdown()
      _early_cleanup() -- Uh oh, if it failed once, it might fail again.
      self._subp.kill()
      self._subp.wait(timeout=60)

I'm going to ignore the _early_cleanup() problem here for a moment, as it's a pre-existing problem and I'll fix it separately. Let's just continue the hypothetical.

shutdown()
  _post_shutdown()
    _early_cleanup()  -- Uh oh for a third time.
    _close_qmp_connection()
    ...

At this point, we're finally going to try to close this connection.

        try:
            self._close_qmp_connection()
        except Exception as err:  # pylint: disable=broad-except
            LOG.warning(
                "Exception closing QMP connection: %s",
                str(err) if str(err) else type(err).__name__
            )
        finally:
            assert self._qmp_connection is None

And the grand result is that it's very likely just going to log a warning to the error stream saying that the QMP connection closed in a weird way -- possibly EOFError, possibly ECONNABORTED, ECONNRESET, EPIPE. The rest of cleanup will continue, and the exception that was stored prior to executing the "finally:" phase of shutdown() will now be raised, which would be:

raise AbnormalShutdown("Could not perform graceful shutdown") from exc

where 'exc' is the theoretical exception from _soft_shutdown > _early_cleanup.

Ultimately, we'll see an error log for the QMP connection terminating in a strange way, followed by a stack trace for our failure to gracefully shut down the VM (because the early cleanup code failed.) I think this is probably a reasonable way to report this circumstance -- the machine.py code has no idea what's going on, so it (as best as it can) just reports the errors it sees. This makes the cleanup code a little "eager to fail", but for a testing suite I thought that was actually appropriate behavior.
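To tie it together, the control flow that produces that pairing of a warning plus an AbnormalShutdown traceback is roughly this (paraphrased sketch of the existing structure, not a verbatim quote):

    def _do_shutdown(self, timeout):
        try:
            self._soft_shutdown(timeout)
        except Exception as exc:
            self._hard_shutdown()
            raise AbnormalShutdown("Could not perform graceful shutdown") \
                from exc

    def shutdown(self, hard=False, timeout=30):
        if not self._launched:
            return
        try:
            if hard:
                self._user_killed = True
                self._hard_shutdown()
            else:
                self._do_shutdown(timeout)
        finally:
            # _post_shutdown() runs _close_qmp_connection() and the rest of
            # cleanup; any exception raised above is re-raised after it.
            self._post_shutdown()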

I think this is at least functional as written, ultimately? The trick is remembering to call wait() instead of shutdown() when you need it, but I don't see a way to predict or track the intent of the test-writer any better than that, so I don't know how to improve the usability of the library around that case, exactly.

--js
 

Kevin

