From: Thomas Huth
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH RFC 1/4] spapr-hcall: take iothread lock during handler call
Date: Fri, 2 Sep 2016 12:06:56 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2

On 02.09.2016 08:32, Nikunj A Dadhania wrote:
> Signed-off-by: Nikunj A Dadhania <address@hidden>
> ---
>  hw/ppc/spapr_hcall.c | 11 +++++++++--
>  1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index e5eca67..daea7a0 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1075,20 +1075,27 @@ target_ulong spapr_hypercall(PowerPCCPU *cpu, target_ulong opcode,
>                               target_ulong *args)
>  {
>      sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
> +    target_ulong ret;
>  
>      if ((opcode <= MAX_HCALL_OPCODE)
>          && ((opcode & 0x3) == 0)) {
>          spapr_hcall_fn fn = papr_hypercall_table[opcode / 4];
>  
>          if (fn) {
> -            return fn(cpu, spapr, opcode, args);
> +            qemu_mutex_lock_iothread();
> +            ret = fn(cpu, spapr, opcode, args);
> +            qemu_mutex_unlock_iothread();
> +            return ret;
>          }
>      } else if ((opcode >= KVMPPC_HCALL_BASE) &&
>                 (opcode <= KVMPPC_HCALL_MAX)) {
>          spapr_hcall_fn fn = kvmppc_hypercall_table[opcode - KVMPPC_HCALL_BASE];
>  
>          if (fn) {
> -            return fn(cpu, spapr, opcode, args);
> +            qemu_mutex_lock_iothread();
> +            ret = fn(cpu, spapr, opcode, args);
> +            qemu_mutex_unlock_iothread();
> +            return ret;
>          }
>      }

I think this will cause a deadlock when running on KVM, since the lock is
already taken in kvm_arch_handle_exit() - which is what calls spapr_hypercall()!

 Thomas
