From: Quentin Perret
Subject: Re: [PATCH v5 00/13] KVM: mm: fd-based approach for supporting KVM guest private memory
Date: Fri, 1 Apr 2022 18:03:16 +0000

On Friday 01 Apr 2022 at 17:14:21 (+0000), Sean Christopherson wrote:
> On Fri, Apr 01, 2022, Quentin Perret wrote:
> > The typical flow is as follows:
> > 
> >  - the host asks the hypervisor to run a guest;
> > 
> >  - the hypervisor does the context switch, which includes switching
> >    stage-2 page-tables;
> > 
> >  - initially the guest has an empty stage-2 (we don't require
> >    pre-faulting everything), which means it'll immediately fault;
> > 
> >  - the hypervisor switches back to host context to handle the guest
> >    fault;
> > 
> >  - the host handler finds the corresponding memslot and does the
> >    ipa->hva conversion. In our current implementation it uses a longterm
> >    GUP pin on the corresponding page;
> > 
> >  - once it has a page, the host handler issues a hypercall to donate the
> >    page to the guest;
> > 
> >  - the hypervisor does a bunch of checks to make sure the host owns the
> >    page, and if all is fine it will unmap it from the host stage-2 and
> >    map it in the guest stage-2, and do some bookkeeping as it needs to
> >    track page ownership, etc;
> > 
> >  - the guest can then proceed to run, and possibly faults in many more
> >    pages;
> > 
> >  - when it wants to, the guest can then issue a hypercall to share a
> >    page back with the host;
> > 
> >  - the hypervisor checks the request, maps the page back in the host
> >    stage-2, does more bookkeeping and returns back to the host to notify
> >    it of the share;
> > 
> >  - the host kernel at that point can exit back to userspace to relay
> >    that information to the VMM;
> > 
> >  - rinse and repeat.
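
(For the record, the host-side fault path above boils down to
something like the sketch below. All helper and hypercall names are
made up for illustration, this is not the actual API:)

    /*
     * Illustrative sketch only: simplified host-side handling of a
     * guest stage-2 fault, following the flow described above. The
     * __pkvm_host_donate_guest hypercall name is hypothetical.
     */
    static int host_handle_guest_fault(struct kvm_vcpu *vcpu, gpa_t ipa)
    {
            struct kvm_memory_slot *slot;
            struct page *page;
            unsigned long hva;
            int ret;

            /* ipa->hva conversion via the corresponding memslot. */
            slot = gfn_to_memslot(vcpu->kvm, gpa_to_gfn(ipa));
            if (!slot)
                    return -EFAULT;
            hva = gfn_to_hva_memslot(slot, gpa_to_gfn(ipa));

            /* Longterm GUP pin on the corresponding page. */
            ret = pin_user_pages_fast(hva, 1, FOLL_WRITE | FOLL_LONGTERM,
                                      &page);
            if (ret != 1)
                    return ret < 0 ? ret : -EFAULT;

            /*
             * Donate the page: the hypervisor checks that the host owns
             * it, unmaps it from the host stage-2 and maps it into the
             * guest stage-2.
             */
            ret = kvm_call_hyp_nvhe(__pkvm_host_donate_guest,
                                    page_to_pfn(page), gpa_to_gfn(ipa));
            if (ret)
                    unpin_user_page(page);
            return ret;
    }

The longterm pin is what keeps the page from being migrated or
reclaimed behind the hypervisor's back for as long as it is donated.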
> 
> I assume there is a scenario where a page can be converted from
> shared=>private? If so, is there a use case where that happens
> post-boot _and_ the contents of the page are preserved?

I think most of our use-cases are private=>shared, but how is that
different?

> > We currently don't allow the host punching holes in the guest IPA space.
> 
> The hole doesn't get punched in guest IPA space, it gets punched in
> the private backing store, which is host PA space.

Hmm, in a previous message I thought you mentioned that when a hole
gets punched in the fd, KVM will go and unmap the page from the
private SPTEs, which will cause a fatal error for any subsequent
access from the guest to the corresponding IPA?

If that's correct, I meant that we currently don't support that - the
host can't unmap anything from the guest stage-2, it can only tear it
down entirely. But again, I'm not too worried about that, we could
certainly implement that part without too many issues.
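
(Concretely, the hole punch being discussed is the VMM doing
something like the following on the private backing fd, which, as I
understand the proposal, is what triggers KVM to zap the
corresponding private mappings:)

    #define _GNU_SOURCE
    #include <fcntl.h>      /* fallocate(), FALLOC_FL_* */

    /*
     * VMM-side sketch (not from the series itself): discard a page of
     * the private backing store. In the fd-based proposal this
     * invalidates the range, KVM unmaps it from the private SPTEs,
     * and any later guest access to the corresponding IPA is fatal.
     */
    static int discard_private_page(int memfd, off_t offset, off_t size)
    {
            return fallocate(memfd,
                             FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                             offset, size);
    }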

> > Once it has donated a page to a guest, it can't have it back until the
> > guest has been entirely torn down (at which point all of memory is
> > poisoned by the hypervisor obviously).
> 
> The guest doesn't have to know that it was handed back a different
> page. It will require defining the semantics to state that the
> trusted hypervisor will clear that page on conversion, but IMO the
> trusted hypervisor should be doing that anyway. IMO, forcing the
> guest to correctly zero pages on conversion is unnecessarily risky
> because converting private=>shared and preserving the contents should
> be a very, very rare scenario, i.e. it's just one more thing for the
> guest to get wrong.

I'm not sure I agree. The guest is going to communicate with an
untrusted entity via that shared page, so it better be careful. Guest
hardening in general is a major topic, and of all problems, zeroing the
page before sharing is probably one of the simplest to solve.
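
To make that concrete, the discipline we expect from the guest is no
more than the sketch below, where hyp_share_page() is a hypothetical
stand-in for the actual share hypercall (I'm not spelling out the
real ABI here):

    /*
     * Guest-side sketch: share a page back with the host.
     * clear_page() ensures no stale private data leaks through the
     * shared page; hyp_share_page() is a hypothetical wrapper around
     * the share-back hypercall.
     */
    static int guest_share_page(struct page *page)
    {
            /* Never leak private contents to the untrusted host. */
            clear_page(page_address(page));

            return hyp_share_page(page_to_phys(page));
    }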

Also, note that in pKVM all the hypervisor code at EL2 runs with
preemption disabled, which is a strict constraint. As such, one of the
main goals is to spend as little time as possible in that context.
We're trying hard to keep the amount of zeroing/memcpy-ing to an
absolute minimum. And that's especially true as we introduce support for
huge pages. So, we'll take every opportunity we get to have the guest
or the host do that work.


