


From: Aravinda Prasad
Subject: Re: [Qemu-ppc] [PATCH v3 2/5] ppc: spapr: Handle "ibm, nmi-register" and "ibm, nmi-interlock" RTAS calls
Date: Thu, 21 Sep 2017 14:39:06 +0530
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.7.0


On Tuesday 22 August 2017 07:38 AM, David Gibson wrote:

[ . . . ]

>>>> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
>>>> index 46012b3..eee8d33 100644
>>>> --- a/include/hw/ppc/spapr.h
>>>> +++ b/include/hw/ppc/spapr.h
>>>> @@ -123,6 +123,12 @@ struct sPAPRMachineState {
>>>>       * occurs during the unplug process. */
>>>>      QTAILQ_HEAD(, sPAPRDIMMState) pending_dimm_unplugs;
>>>>  
>>>> +    /* State related to "ibm,nmi-register" and "ibm,nmi-interlock" calls */
>>>> +    target_ulong guest_machine_check_addr;
>>>> +    bool mc_in_progress;
>>>> +    int mc_cpu;
>>>
>>> mc_cpu isn't actually used yet in this patch.  In any case it and
>>> mc_in_progress could probably be folded together, no?
>>
>> It is possible to fold mc_cpu and mc_in_progress together, with the
>> convention that -1 means no machine check is in progress, while any
>> other value is the index of the CPU handling the machine check.
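The folding suggested above could look like the following sketch. The names (MCState, MC_CPU_NONE, mc_*) are hypothetical stand-ins for illustration, not the actual fields or helpers from the patch:

```c
#include <assert.h>

/* Sentinel meaning "no machine check in progress". */
#define MC_CPU_NONE (-1)

/* Stand-in for the relevant part of sPAPRMachineState: instead of a
 * separate mc_in_progress flag plus mc_cpu, keep only mc_cpu and use
 * MC_CPU_NONE as the idle value. */
typedef struct {
    int mc_cpu;                  /* CPU handling the MC, or MC_CPU_NONE */
} MCState;

static int mc_in_progress(const MCState *s)
{
    return s->mc_cpu != MC_CPU_NONE;
}

static void mc_begin(MCState *s, int cpu_index)
{
    s->mc_cpu = cpu_index;       /* marks in-progress and records the CPU */
}

static void mc_end(MCState *s)
{
    s->mc_cpu = MC_CPU_NONE;
}
```

One field then answers both questions (is a machine check in flight, and on which CPU), so the two can never disagree.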
>>
>>>
>>> These values will also need to be migrated, AFAICT.
>>
>> I am thinking about how to handle migration while machine check
>> handling is in progress. We could probably wait for machine check
>> handling to complete before migrating, since the error could be
>> irrelevant once the guest has migrated to new hardware. In that case
>> we would not need to migrate these values.
> 
> Ok.

Here is what I think about handling machine checks during migration,
based on my understanding of the VM migration code.

There are two possibilities here. First, migration can be initiated
while machine check handling is in progress. Second, a machine check
error can happen while the migration is in progress.

To handle the first case we can add migrate_add_blocker() call when we
start handling the machine check error and issue migrate_del_blocker()
when done. I think this should solve the issue.
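The first-case flow could be sketched as below. This is a toy model: migration_started, blocker_count, and handle_machine_check() are made-up stand-ins, and the real QEMU migrate_add_blocker()/migrate_del_blocker() take an Error *reason (and errp) rather than no arguments:

```c
#include <assert.h>

/* Stand-ins for QEMU's migration-blocker API.  This toy version just
 * refuses to add a blocker once migration has started, mirroring the
 * failure mode discussed for the second case. */
static int migration_started;
static int blocker_count;

static int migrate_add_blocker(void)
{
    if (migration_started) {
        return -1;               /* too late: migration already running */
    }
    blocker_count++;
    return 0;
}

static void migrate_del_blocker(void)
{
    blocker_count--;
}

/* First case: machine check handling starts before migration, so block
 * migration for its duration, then allow it again. */
static int handle_machine_check(void)
{
    if (migrate_add_blocker() < 0) {
        return -1;               /* second case: needs other handling */
    }
    /* ... deliver the machine check to the guest handler here ... */
    migrate_del_blocker();
    return 0;
}
```

The interesting part is the error path: once migration is under way, adding a blocker fails, which is exactly why the second case below needs separate thought.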

The second case is a bit tricky. The migration has already started, so
the migrate_add_blocker() call will fail. We also cannot wait until the
migration completes to handle the machine check error, as the VM's data
could be corrupt.

Machine check errors should not be an issue while the migration is in
the RAM copy phase, as the VM is still active with vCPUs running. The
problem is when we hit a machine check as the migration is about to
complete. For example:

1. vCPU2 hits a machine check error during migration.

2. KVM causes a VM exit on vCPU2, and vCPU2's NIP is changed to the
guest-registered machine check handler.

3. migration_completion() issues vm_stop(), and hence vCPU2 is either
never scheduled again on the source hardware or preempted while
executing the machine check handler.

4. vCPU2 is resumed on the target hardware and either starts or
continues processing the machine check error. This could be a problem,
as these errors are specific to the source hardware. For instance, when
the guest issues memory poisoning upon such an error, a clean page on
the target hardware is poisoned while the corrupt page on the source
hardware is not.

The second case, hitting a machine check during the final phase of
migration, is rare, but I wanted to check what others think about it.

Regards,
Aravinda

> 
>>
>> Regards,
>> Aravinda
>>
>>>
>>>> +    QemuCond mc_delivery_cond;
>>>> +
>>>>      /*< public >*/
>>>>      char *kvm_type;
>>>>      MemoryHotplugState hotplug_memory;
>>>> @@ -519,8 +525,10 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>>>>  #define RTAS_IBM_CREATE_PE_DMA_WINDOW           (RTAS_TOKEN_BASE + 0x27)
>>>>  #define RTAS_IBM_REMOVE_PE_DMA_WINDOW           (RTAS_TOKEN_BASE + 0x28)
>>>>  #define RTAS_IBM_RESET_PE_DMA_WINDOW            (RTAS_TOKEN_BASE + 0x29)
>>>> +#define RTAS_IBM_NMI_REGISTER                   (RTAS_TOKEN_BASE + 0x2A)
>>>> +#define RTAS_IBM_NMI_INTERLOCK                  (RTAS_TOKEN_BASE + 0x2B)
>>>>  
>>>> -#define RTAS_TOKEN_MAX                          (RTAS_TOKEN_BASE + 0x2A)
>>>> +#define RTAS_TOKEN_MAX                          (RTAS_TOKEN_BASE + 0x2C)
>>>>  
>>>>  /* RTAS ibm,get-system-parameter token values */
>>>>  #define RTAS_SYSPARM_SPLPAR_CHARACTERISTICS      20
>>>>
>>>
>>
> 

-- 
Regards,
Aravinda



