qemu-devel


From: Alexander Graf
Subject: Re: [Qemu-devel] [Qemu-ppc] [PATCH 5/5] target-ppc: Handle cases when multi-processors get machine-check
Date: Thu, 28 Aug 2014 10:42:21 +0200
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:31.0) Gecko/20100101 Thunderbird/31.0


On 28.08.14 08:56, Aravinda Prasad wrote:
> 
> 
> On Wednesday 27 August 2014 04:10 PM, Alexander Graf wrote:
>>
>>
>> On 25.08.14 15:45, Aravinda Prasad wrote:
>>> It is possible for multiple processors to experience a machine
>>> check at or about the same time. As per PAPR, subsequent
>>> processors serialize, waiting for the first processor to
>>> issue the ibm,nmi-interlock call.
>>>
>>> The second processor retries if the first processor, which
>>> received a machine check, is still reading the error log
>>> and has yet to issue the ibm,nmi-interlock call.
>>>
>>> This patch implements this functionality.
>>>
>>> Signed-off-by: Aravinda Prasad <address@hidden>
>>
>> This patch doesn't make any sense. Both threads will issue an HCALL
>> which will get locked inside of QEMU, so we'll never see the case where
>> both hypercalls get processed at the same time.
> 
> AFAIK, only one thread can succeed in entering QEMU when hcalls are
> issued in parallel from different guest CPUs, as entry is gated by a
> lock. Hence only one hcall is processed at a time.
> 
> As per PAPR, we don't want any other KVMPPC_H_REPORT_ERR hcall to be
> processed at the same time, and any further KVMPPC_H_REPORT_ERR hcalls
> issued in the meantime should wait until the OS issues ibm,nmi-interlock.

Oh, now I understand. The lock is held for the span
[h_report_mc_err ... rtas_ibm_nmi_interlock].

This should definitely go into the comment on the check in
h_report_mc_err. In fact, drop the note that only one thread can
execute; instead, document where the lock gets unset and that during
that window only one vcpu may process the NMI.
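The serialization being discussed could be sketched roughly as below. This is a
minimal illustration, not the patch's actual code: the names `mc_in_progress`,
`h_report_mc_err`, `rtas_ibm_nmi_interlock`, and the return values `H_SUCCESS`
and `H_RESOURCE` are assumptions standing in for whatever the patch and PAPR
actually define. The point is the lock lifetime Alex describes: set in the
hcall, cleared only when the guest OS issues ibm,nmi-interlock.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical sketch of the machine-check serialization discussed in
 * this thread.  While mc_in_progress is set, exactly one vcpu owns the
 * error log; any other vcpu's KVMPPC_H_REPORT_ERR must retry.
 */

static bool mc_in_progress; /* set while a vcpu is processing the NMI */

#define H_SUCCESS    0
#define H_RESOURCE  (-16)   /* illustrative "busy, retry" return code */

static long h_report_mc_err(void)
{
    /*
     * The lock is held from here until rtas_ibm_nmi_interlock: during
     * that window only one vcpu may process the NMI, so a second vcpu
     * that machine-checks concurrently is told to retry.
     */
    if (mc_in_progress) {
        return H_RESOURCE;
    }
    mc_in_progress = true;
    return H_SUCCESS;
}

static void rtas_ibm_nmi_interlock(void)
{
    /* Guest OS has consumed the error log; admit the next vcpu. */
    mc_in_progress = false;
}
```

Under these assumed names, a first caller gets H_SUCCESS, a concurrent second
caller gets H_RESOURCE and retries, and after the interlock the retry succeeds.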


Alex


