

Re: [PATCH v4 1/2] target/arm: kvm: Handle DABT with no valid ISS

From: Andrew Jones
Subject: Re: [PATCH v4 1/2] target/arm: kvm: Handle DABT with no valid ISS
Date: Mon, 27 Apr 2020 08:18:27 +0200

On Sat, Apr 25, 2020 at 11:24:14AM +0200, Paolo Bonzini wrote:
> On 24/04/20 14:16, Dr. David Alan Gilbert wrote:
> >>> I was trying to work out whether we need to migrate this state,
> >>> and I'm not sure. Andrew, do you know? I think this comes down
> >>> to "at what points in QEMU's kvm run loop can migration kick in",
> >>> and specifically if we get a KVM_EXIT_ARM_NISV do we definitely
> >>> go round the loop and KVM_RUN again without ever checking
> >>> to see if we should do a migration?
> >>>
> >> I'd prefer a migration expert confirm this, so I've CC'ed David and Juan,
> >> but afaict there's no way to break out of the KVM_RUN loop after a
> >> successful (ret=0) call to kvm_arch_handle_exit() until after the next
> >> KVM_RUN ioctl. This is because even if migration kicks the vcpus between
> >> kvm_arch_handle_exit() and the next run, the signal won't do anything
> >> other than prepare the vcpu for an immediate exit.
> As far as QEMU is concerned, this should be enough for Beata's patch to
> be safe.  If the signal causes KVM to exit before KVM_EXIT_ARM_NISV,
> it's of course okay.  If you get a KVM_EXIT_ARM_NISV, however, KVM_RUN
> will exit with return code 0 and kvm_cpu_exec will:
> - set env->ext_dabt_pending
> - go round the loop again
> - notice cpu->exit_request and schedule an immediate exit
> - call kvm_arch_put_registers
> - call KVM_RUN again, which will exit with -EINTR
> - exit the loop and allow migration to proceed

This was my understanding as well. Thanks for the confirmation.

> However, I'm not sure that it's a good idea to
> +        /* Clear instantly if the call was successful */
> +        env->ext_dabt_pending = 0;
> Rather, this should be done by the next kvm_arch_get_registers when it
> calls KVM_GET_VCPU_EVENTS.  It's also possible to add an assertion in
> kvm_get_vcpu_events that you always get zero, to justify that the
> field is not migrated.

I like the idea of a balanced API, where the 'set' abort would stay
pending until the next 'get', but this event doesn't work that way.
Documentation/virt/kvm/api.rst states:

  It is not possible to read back a pending external abort (injected via
  KVM_SET_VCPU_EVENTS or otherwise) because such an exception is always
  delivered directly to the virtual CPU.

I think clearing the userspace copy instantly after a successful
KVM_SET_VCPU_EVENTS implements that correctly.


