
Re: [RFC] adding a generic QAPI event for failed device hotunplug


From: Daniel Henrique Barboza
Subject: Re: [RFC] adding a generic QAPI event for failed device hotunplug
Date: Mon, 8 Mar 2021 15:01:53 -0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.8.0



On 3/8/21 2:04 PM, Markus Armbruster wrote:
Daniel Henrique Barboza <danielhb413@gmail.com> writes:

On 3/6/21 3:57 AM, Markus Armbruster wrote:
Cc: ACPI maintainers for additional expertise.

Daniel Henrique Barboza <danielhb413@gmail.com> writes:

Hi,

Recent changes in pseries code (not yet pushed, available at David's
ppc-for-6.0) are using the QAPI event MEM_UNPLUG_ERROR to report memory
hotunplug errors in the pseries machine.

The pseries machine is also using a timeout to cancel CPU hotunplugs that
take too long to finish (in which case we assume a guest-side error), and
it would be desirable to send a QAPI event in this case as well.

At this moment, there is no "CPU_UNPLUG_ERROR" in QAPI (I guess ACPI doesn't
need it).

I see two interpretations of "ACPI doesn't need":

1. Unplug can't fail, or QEMU can't detect failure.  Michael, Igor?

2. Management applications haven't needed to know badly enough to
implement an event.

Before sending patches to implement this new event I had a talk with David
Gibson and he suggested that, instead of adding a new CPU_UNPLUG_ERROR event,
we could add a generic event (e.g. DEVICE_UNPLUG_ERROR) that can be used by
the pseries machine in both error scenarios (MEM and CPU).

Good point.  One general event is better than two special ones that
could easily grow siblings.
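
A minimal sketch of what such a generic event could look like in the
QAPI schema, with member names borrowed from the existing DEVICE_DELETED
event as an assumption rather than a settled design:

    ##
    # @DEVICE_UNPLUG_ERROR:
    #
    # Emitted when a device hot unplug fails.
    #
    # @device: the device's ID if it has one
    #
    # @path: the device's QOM path
    ##
    { 'event': 'DEVICE_UNPLUG_ERROR',
      'data': { '*device': 'str', 'path': 'str' } }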

This could be used by x86 as well, although I believe the use of
MEM_UNPLUG_ERROR would need to be kept for a while to avoid breaking ABI.

Yes.  Our rules for interface deprecation apply.
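
Under those rules MEM_UNPLUG_ERROR would first be marked deprecated
rather than dropped outright; assuming QAPI's 'deprecated' special
feature is the mechanism used, the existing definition would grow a
features list along these lines:

    { 'event': 'MEM_UNPLUG_ERROR',
      'data': { 'device': 'str', 'msg': 'str' },
      'features': [ 'deprecated' ] }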

Any suggestions/comments?

We should document the event's reliability.  Are there unplug operations
where we can't detect failure?  Are there unplug operations where we
could, but haven't implemented the event?

The current version of the PowerPC spec that the pseries machine implements
(LOPAR) does not provide a way for the virtual machine to report a hotunplug
error back to the hypervisor. If something fails, the hypervisor is left
in the dark.

What happened in the 6.0.0 dev cycle is that we faced a reliable way of

Do you mean "unreliable way"?

I guess a better word would be 'reproducible', as in we discovered a
reproducible way of getting the guest kernel to refuse the CPU hotunplug.


making CPU hotunplug fail in the guest (trying to hotunplug the last online
CPU) and the pseries machine was unprepared to deal with it. We ended up
implementing a hotunplug timeout and, if the timeout kicks in, we assume
that the CPU hotunplug failed in the guest. This is the first scenario we have
today where we want to send a QAPI event reporting the CPU hotunplug error,
but this event does not exist in QEMU ATM.

When the timeout kicks in, how can you know the operation failed?  You
better be sure when you send an error event.  In other words: what
prevents the scenario where the operation is much slower than you
expect, the timeout expires, the error event is sent, and then the
operation completes successfully?

A CPU hotunplug in a pseries guest takes no more than a couple of seconds, even
if the guest is under heavy load. The timeout is set to 15 seconds.

I'm aware that there's always that special use case we haven't seen yet
where this assumption is no longer valid. The plan is to change the spec and
the guest kernel to signal the CPU hotunplug error back to QEMU before the
dragon lands. For now, timing out the CPU hotunplug process when we're almost
certain (but not 100%) that it failed in the guest is the best we can do.
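
As a rough illustration of the timeout approach, here is a sketch in
QEMU's C idiom. The timer calls are real QEMU APIs, but
qapi_event_send_device_unplug_error() is hypothetical: it is the sender
the QAPI generator would emit for the proposed event, which does not
exist yet, and the rollback step is machine specific:

    #include "qemu/osdep.h"
    #include "qemu/timer.h"
    #include "hw/qdev-core.h"

    #define UNPLUG_TIMEOUT_MS 15000  /* well above the couple of seconds we observe */

    static void unplug_timeout_cb(void *opaque)
    {
        DeviceState *dev = opaque;
        char *path = object_get_canonical_path(OBJECT(dev));

        /* The timer fired: assume the guest refused the unplug. */
        qapi_event_send_device_unplug_error(dev->id, path);  /* hypothetical */
        g_free(path);
        /* ...roll back machine-specific unplug state here... */
    }

    static void arm_unplug_timeout(DeviceState *dev)
    {
        QEMUTimer *t = timer_new_ms(QEMU_CLOCK_VIRTUAL,
                                    unplug_timeout_cb, dev);

        timer_mod(t, qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL)
                     + UNPLUG_TIMEOUT_MS);
    }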

For both cases I want to use DEVICE_UNPLUG_ERROR right from the start, avoiding
guest-visible changes when we change how we're detecting the unplug errors.


The second scenario is a memory hotunplug error. I found out that the pseries
guest kernel does a reconfiguration step when re-attaching the DIMM right
after refusing the hotunplug, and this reconfiguration is visible to QEMU.
I proceeded to make the pseries machine detect this error case, roll back the
unplug operation and fire the MEM_UNPLUG_ERROR event. This case is already
covered by QAPI, but if we add a DEVICE_UNPLUG_ERROR event I would use it in
this case as well, instead of the MEM-specific one.

This investigation and work in the mem hotunplug error path triggered a
discussion in qemu-ppc, where we're considering whether the signalling the
kernel does for the DIMM hotunplug error should be used for all other device
hotunplug errors as well, given that the reconfiguration per se is not
forbidden by LOPAR and is currently a no-op. We would make a LOPAR spec change
to make this an official hotunplug error report mechanism, and all pseries
hotunplug operations, for all devices, would report DEVICE_UNPLUG_ERROR QAPI
events in the error path.

Granted, the spec change + kernel change is not something we will be able
to nail down in the 6.0.0 cycle, but having the DEVICE_UNPLUG_ERROR QAPI event
already in place would make things easier in the future.


I have a draft documenting these changes that I forgot to post. I would post
it as docs/system/ppc-spapr-hotunplug-notes.rst. I can add the QAPI event
information there as well. Does that work for you as far as documentation
goes?

A DEVICE_UNPLUG_ERROR event makes perfect sense regardless of machine
and device.

I'm not asking you to implement it for all machines and devices.  But
I want its design to support growth towards that goal, and its
documentation to reflect its current state.

In particular, the doc comment in the QAPI schema should list the
limitations.  If the event is only implemented for LOPAR for now, say so.
If it's only implemented for certain devices, say so.
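
Concretely, the event's doc comment could carry a note along these
lines (the wording is illustrative, matching the current state
described above):

    # Note: this event is currently emitted only by the pseries
    # machine, and only for CPU and memory (DIMM) hot unplug errors.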

Questions?


Nope. Thanks for the pointers. I'll add the DEVICE_UNPLUG_ERROR QAPI event
in a way that it can be used by other machines in the future, and document
where the event is being used ATM.


Thanks,


DHB