Re: SEV guest attestation


From: Dr. David Alan Gilbert
Subject: Re: SEV guest attestation
Date: Thu, 25 Nov 2021 15:19:03 +0000
User-agent: Mutt/2.1.3 (2021-09-10)

* Daniel P. Berrangé (berrange@redhat.com) wrote:
> On Wed, Nov 24, 2021 at 06:29:07PM +0000, Dr. David Alan Gilbert wrote:
> > * Daniel P. Berrangé (berrange@redhat.com) wrote:
> > > On Wed, Nov 24, 2021 at 11:34:16AM -0500, Tyler Fanelli wrote:
> > > > Hi,
> > > > 
> > > > We recently discussed a way for remote SEV guest attestation through
> > > > QEMU. My initial approach was to get the data needed for attestation
> > > > through different QMP commands (all of which are already available, so
> > > > no changes required there), deriving hashes and certificate data, and
> > > > collecting all of this into a new QMP struct (SevLaunchStart, which
> > > > would include the VM's policy, secret, and GPA) which would need to be
> > > > upstreamed into QEMU. Once this is provided, QEMU would then need to
> > > > have support for attestation before a VM is started. Upon speaking to
> > > > Dave about this proposal, he mentioned that this may not be the best
> > > > approach, as some situations would render the attestation unavailable,
> > > > such as the instance where a VM is running in a cloud, and a guest
> > > > owner would like to perform attestation via QMP (a likely scenario),
> > > > yet a cloud provider cannot simply let anyone pass arbitrary QMP
> > > > commands, as this could be an issue.
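
The QMP pieces mentioned above do already exist in QEMU (query-sev,
query-sev-launch-measure, sev-inject-launch-secret). Purely as an
illustrative sketch of how a privileged host-side tool might drive them over
the monitor socket (the socket path here is an assumption, and the guest has
to be started with SEV enabled and paused for the measurement query to
succeed):

  # Minimal sketch: drive QEMU's existing SEV QMP commands over the
  # monitor socket from a privileged host-side tool.
  import json
  import socket

  def qmp(sock, rfile, command, arguments=None):
      """Send one QMP command and wait for its return value."""
      msg = {"execute": command}
      if arguments:
          msg["arguments"] = arguments
      sock.sendall(json.dumps(msg).encode() + b"\n")
      while True:
          reply = json.loads(rfile.readline())
          if "return" in reply:
              return reply["return"]
          if "error" in reply:
              raise RuntimeError(reply["error"])
          # anything else is an asynchronous event; keep reading

  sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
  sock.connect("/tmp/qmp.sock")    # assumed: -qmp unix:/tmp/qmp.sock,server=on,wait=off
  rfile = sock.makefile("r")
  rfile.readline()                 # discard the QMP greeting banner
  qmp(sock, rfile, "qmp_capabilities")

  print(qmp(sock, rfile, "query-sev"))                 # policy, state, handle, ...
  print(qmp(sock, rfile, "query-sev-launch-measure"))  # {"data": "<base64 blob>"}
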
> > > 
> > > As a general point, QMP is a low level QEMU implementation detail,
> > > which is generally expected to be consumed exclusively on the host
> > > by a privileged mgmt layer, which will in turn expose its own higher
> > > level APIs to users or other apps. I would not expect to see QMP
> > > exposed to anything outside of the privileged host layer.
> > > 
> > > We also use the QAPI protocol for QEMU guest agent communication,
> > > however, that is a distinct service from QMP on the host. It shares
> > > most infra with QMP but has a completely different command set. On the
> > > host it is not consumed inside QEMU, but instead consumed by a
> > > mgmt app like libvirt.
> > > 
> > > > So I ask, does anyone involved in QEMU's SEV implementation have any
> > > > input on a quality way to perform guest attestation? If so, I'd be
> > > > interested.
> > > 
> > > I think what's missing is some clearer illustrations of how this
> > > feature is expected to be consumed in some real world application
> > > and the use cases we're trying to solve.
> > > 
> > > I'd like to understand how it should fit in with common libvirt
> > > applications across the different virtualization management
> > > scenarios - e.g. virsh (command line), virt-manager (local desktop
> > > GUI), cockpit (single host web mgmt), OpenStack (cloud mgmt), etc.
> > > And of course any non-traditional virt use cases that might be
> > > relevant such as Kata.
> > 
> > That's still not that clear; I know Alice and Sergio have some ideas
> > (cc'd).
> > There are also some standardisation efforts (e.g.
> > https://www.potaroo.net/ietf/html/ids-wg-rats.html
> > and https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html
> > ), though I can't claim to fully understand them.
> > However, there are some themes that are emerging:
> > 
> >   a) One use is to only allow a VM to access some private data once we
> > prove it's the VM we expect running in a secure/confidential system
> >   b) (a) normally involves requesting some proof from the VM and then
> > providing it some confidential data/a key if it's OK
> 
> I guess I'm wondering what the threat we're protecting against is,
> and / or which pieces of the stack we can trust ?

Yeah, and that varies depending on who you speak to.

> e.g., if the host has 2 VMs running, we verify the 1st and provide
> its confidential data back to the host, what stops the host giving
> that data to the 2nd non-verified VM ?
> 
> Presumably the data has to be encrypted with a key that is uniquely
> tied to this specific boot attempt of the verified VM, and not
> accessible to any other VM, or to future boots of this VM ?

In the SEV/SEV-ES case the attestation is made unique by a nonce, I think,
and there's some type of session key used (can't remember the details),
and the returning of the key to the VM is encrypted through that same
channel; so you know you're giving the key to the thing you attested.
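
For reference, my understanding of the guest-owner side of that check is
roughly the sketch below; the field layout follows my reading of the AMD SEV
API spec (LAUNCH_MEASURE), and the TIK is the integrity key from the session
blob the guest owner generated for launch, so treat the details as hedged and
check the spec before relying on them:

  # Rough sketch of verifying a SEV/SEV-ES launch measurement.
  import hmac, hashlib, struct

  def expected_measurement(tik, api_major, api_minor, build,
                           policy, launch_digest, mnonce):
      data = (bytes([0x04, api_major, api_minor, build])  # 0x04 = measurement context (per spec)
              + struct.pack("<I", policy)                 # guest policy
              + launch_digest                             # SHA-256 of the measured memory (the firmware)
              + mnonce)                                   # 16-byte nonce chosen by the PSP firmware
      return hmac.new(tik, data, hashlib.sha256).digest()

  # query-sev-launch-measure returns (I believe) measurement || mnonce,
  # base64 encoded. Only if the HMAC matches do we wrap the disk secret
  # for the guest and hand it back via sev-inject-launch-secret.
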

However, since in SEV/SEV-ES you only measure the firmware (and number of
CPUs) all VMs look pretty much identical at that point - distinguishing
them relies on either:
  a) In the GRUB/OVMF case you are relying on the key you return to the
VM successfully decrypting the disk and the embedded GRUB being able to
load the kernel/initrd (you attested the embedded GRUB, so you trust
it to do that)
  b) In the separate kernel/initrd case you do have the kernel command
line measured as well (see the sketch below).
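
To make (b) concrete, here is a sketch of the reference values a verifier
would hold, assuming QEMU's kernel-hashes mechanism, which folds SHA-256
digests of the kernel, initrd and command line into the measured firmware
image. The file names, the example command line and the trailing NUL on the
command line are assumptions for illustration; the exact table layout lives
in QEMU's SEV code:

  # Sketch only: compute the file digests a verifier would compare against.
  import hashlib

  def sha256_file(path):
      h = hashlib.sha256()
      with open(path, "rb") as f:
          for chunk in iter(lambda: f.read(1 << 20), b""):
              h.update(chunk)
      return h.hexdigest()

  reference = {
      "firmware": sha256_file("OVMF.fd"),      # covered by the launch digest itself
      "kernel":   sha256_file("vmlinuz"),
      "initrd":   sha256_file("initrd.img"),
      # command line hashed with its trailing NUL (assumption - check QEMU)
      "cmdline":  hashlib.sha256(b"console=ttyS0 root=/dev/vda2\x00").hexdigest(),
  }
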

> >   c) RATS splits the problem up:
> >
> > https://www.ietf.org/archive/id/draft-ietf-rats-architecture-00.html#name-architectural-overview
> >     I don't fully understand the split yet, but in principle there are
> > at least a few different things:
> > 
> >   d) The comms layer
> >   e) Something that validates the attestation message (i.e. the
> > signatures are valid, the hashes all add up etc)
> >   f) Something that knows what hashes to expect (i.e. oh that's a RHEL
> > 8.4 kernel, or that's a valid kernel command line)
> >   g) Something that holds some secrets that can be handed out if e & f
> > are happy.
> > 
> >   There have also been proposals (e.g. Intel HTTPA) for an attestable
> > connection after a VM is running; that's probably quite different from
> > (g) but still involves (e) & (f).
> > 
> > In the simpler setups d,e,f,g probably live in one place; but it's not
> > clear where they live - for example one scenario says that your cloud
> > management layer holds some of them, another says you don't trust your
> > cloud management layer and you keep them separate.
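
Just to make that split concrete, one way to picture (e), (f) and (g) as
separate pieces is sketched below; all of the names are made up for
illustration, and a real verifier would obviously do far more than a hash
lookup:

  from dataclasses import dataclass

  @dataclass
  class ReferenceValues:              # role (f): knows which measurements to expect
      expected_measurements: set

  class Verifier:                     # role (e): appraises the attestation evidence
      def __init__(self, refs):
          self.refs = refs

      def appraise(self, measurement):
          # a real verifier also checks signatures / certificate chains here
          return measurement in self.refs.expected_measurements

  class SecretBroker:                 # role (g): releases secrets when (e)/(f) are happy
      def __init__(self, verifier, secret):
          self.verifier = verifier
          self.secret = secret

      def release(self, measurement):
          return self.secret if self.verifier.appraise(measurement) else None
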
> 
> Yep, again I'm wondering what the specific threats are that we're
> trying to mitigate. Whether we trust the cloud mgmt APIs, but don't
> trust the compute hosts, or whether we trust neither the cloud
> mgmt APIs nor the compute hosts.
> 
> If we don't trust the compute hosts, does that include the part
> of the cloud mgmt API that is  running on the compute host, or
> does that just mean the execution environment of the VM, or something
> else?

I think there's pretty good consensus that you don't trust the compute host
at all.  How much of the rest of the cloud you trust varies
depending on who you ask.  Some suggest trusting one small part of the
cloud (some highly secure, apparently trusted, attestation box).
Some would rather not trust the cloud at all, so would want to do
attestation against their own system;  the problem there is you have to do
an off-site attestation every time your VMs start.
Personally I think maybe a 2-level system would work;  you boot one [set
of] VMs in the cloud that's attested against your off-site system - and
they then run the attestation service for all your VMs in the cloud.

> > So I think all we're actually interested in at the moment, is (d) and
> > (e) and the way for (g) to get the secret back to the guest.
> > 
> > Unfortunately the comms and their contents vary heavily with the
> > technology; in some you're talking to the qemu/hypervisor (SEV/SEV-ES)
> > while in others you're talking to the guest after boot (SEV-SNP/TDX,
> > maybe SEV-ES in some cases).
> > 
> > So my expectation at the moment is libvirt needs to provide a transport
> > layer for the comms, to enable an external validator to retrieve the
> > measurements from the guest/hypervisor and provide data back if
> > necessary.  Once this shakes out a bit, we might want libvirt to be
> > able to invoke the validator; however I expect (f) and (g) to be much
> > more complex things that don't feel like they belong in libvirt.
> 
> Yep, I don't think (f) & (g) belong in libvirt, since libvirt is
> deployed per compute host, while (f) / (g) are something that is
> likely to be deployed in a separate trusted host, at least for
> data center / cloud deployments. Maybe there's a case where they
> can all be same-host for more specialized use cases.

Or even less specialised;  the easiest setup is where you run an
attestation server that does all this on your site, and then put the
compute nodes in a cloud somewhere.

Dave

> Regards,
> Daniel
> -- 
> |: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org         -o-            https://fstop138.berrange.com :|
> |: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
> 
-- 
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK



