From: Dr. David Alan Gilbert
Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support with a QEMU-external TPM
Date: Thu, 21 Jan 2016 14:53:49 +0000
User-agent: Mutt/1.5.24 (2015-08-30)

* Stefan Berger (address@hidden) wrote:
> "Dr. David Alan Gilbert" <address@hidden> wrote on 01/21/2016 
> 06:40:35 AM:
> 
> > 
> > * Stefan Berger (address@hidden) wrote:
> > > Stefan Berger/Watson/IBM wrote on 01/20/2016 02:51:58 PM:
> > > 
> > > > "Daniel P. Berrange" <address@hidden> wrote on 01/20/2016
> > > > 10:42:09 AM:
> > > > 
> > > > > 
> > > > > On Wed, Jan 20, 2016 at 10:23:50AM -0500, Stefan Berger wrote:
> > > > > > "Daniel P. Berrange" <address@hidden> wrote on 01/20/2016
> > > > > > 09:58:39 AM:
> > > > > > 
> > > > > > 
> > > > > > > Subject: Re: [Qemu-devel] [PATCH v5 0/4] Extend TPM support
> > > > > > > with a QEMU-external TPM
> > > > > > > 
> > > > > > > On Mon, Jan 04, 2016 at 10:23:18AM -0500, Stefan Berger wrote:
> > > > > > > > The following series of patches extends TPM support with an
> > > > > > > > external TPM that offers a Linux CUSE (character device in
> > > > > > > > userspace) interface. This TPM lets each VM access its own
> > > > > > > > private vTPM.
> > > > > > > 
> > > > > > > What is the backing store for this vTPM ? Are the vTPMs all
> > > > > > > multiplexed onto the host's physical TPM or is there something
> > > > > > > else going on ?
> > > > > > 
> > > > > > The vTPM writes its state into a plain file. In case the user
> > > > > > started the vTPM, the user gets to choose the directory. In case
> > > > > > of libvirt, libvirt sets up the directory and starts the vTPM
> > > > > > with the directory as a parameter. The expectation for VMs (also
> > > > > > containers) is that each VM can use the full set of TPM commands
> > > > > > with the vTPM and, due to how the TPM works, it cannot use the
> > > > > > hardware TPM for that. SeaBIOS has been extended with TPM 1.2
> > > > > > support and initializes the vTPM in the same way it would
> > > > > > initialize a hardware TPM.
> > > > > 
> > > > > So if it's using a plain file, then when snapshotting VMs we have
> > > > > to do full copies of the file and keep them all in sync with the
> > > > > disk snapshots. By not having this functionality in QEMU we don't
> > > > > immediately have a way to use qcow2 for the vTPM file backing
> > > > > store to deal with snapshot management. The vTPM's needs around
> > > > > snapshotting feel fairly similar to the NVRAM needs, so it would
> > > > > be desirable to have the ability to do a consistent thing for
> > > > > both.
> > > > 
> > > > The plain file serves as the current state of the TPM. In case of
> > > > migration, suspend, or snapshotting, the vTPM state blobs are
> > > > retrieved from the vTPM using ioctls, and in case of a snapshot
> > > > they are written into the QCoW2. Upon resume the state blobs are
> > > > set in the vTPM. It is working as it is.
> > > 
> > > There is one issue in case of resume of a snapshot. If the permanent
> > > state of the TPM is modified during snapshotting, e.g. ownership is
> > > taken of the TPM, the state, including the owner password, is written
> > > into the plain file. Then the VM is shut down. Once it is restarted
> > > (not a resume from a snapshot), the TPM's state will reflect what was
> > > done during the run of that snapshot. This is likely undesirable. The
> > > only way around this seems to be that one needs to know the reason
> > > why the state blobs were pushed into the vTPM. In case of a snapshot,
> > > the writing of the permanent state into a file may need to be
> > > suppressed, while on a VM resume and a resume from migration it needs
> > > to be written into the TPM's state file.
> > 
> > I don't understand that; are you saying that the ioctls don't provide
> > all the information that's included in the state file?
> 
> No. Running a snapshot does not change the state of the VM image unless
> one takes another snapshot. The vTPM has to behave the same way, meaning
> that the state of the vTPM must not be overwritten while in a snapshot.
> However, the vTPM needs to know that it's running a snapshot whose state
> is 'volatile'.
> 
> Example: 
> 1) A VM is run and VM image is in state VM-A and vTPM is in state vTPM-A. 
> The VM is shut down and VM is in state VM-A and vTPM is in state vTPM-A.
> 
> 2) The VM runs a snapshot and the VM image is in state VM-B and the vTPM
> is in state vTPM-B. The user takes ownership of the vTPM, which puts the
> vTPM into state vTPM-B2. The VM is shut down and with that all VM image
> state is discarded. The vTPM's state also needs to be discarded.
> 
> 3) The VM is run again and the VM image is in state VM-A, so the vTPM
> must be in state vTPM-A from 1). However, at the moment the vTPM would
> be in state vTPM-B2 from the last run of the snapshot, since that state
> was written into the vTPM's state file.
> 
> The way around the problem in 3) stemming from 2) is writing the vTPM
> state (which is kept in a file) into a differently named file while
> running a snapshot. However, QEMU needs to tell the vTPM that it's
> running a snapshot and that the state is to be treated as volatile. A
> flag that conveys 'you're running a snapshot' while setting the device
> state would be enough. Currently, though, the function that triggers the
> setting of device state doesn't receive such a flag. So there would have
> to be a function like 'flag = qemu_doing_snapshot()' and that flag would
> be passed to the vTPM. Maybe it already exists.
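
[Editor's sketch of the flag idea above. Every name here - vtpm_set_state(),
the file names, the flag parameter - is hypothetical, not an existing QEMU
or vTPM interface; a plain file write stands in for the vTPM's state store:]

```c
/*
 * Hypothetical sketch: when QEMU pushes state blobs while running a
 * snapshot, the vTPM writes them to a side file, so the permanent
 * state file still holds the pre-snapshot state for a later normal
 * start. None of these names exist in QEMU or swtpm.
 */
#include <stdio.h>

static int vtpm_set_state(const unsigned char *blob, size_t len,
                          int running_snapshot)
{
    /* A snapshot run must not clobber the permanent state file. */
    const char *path = running_snapshot ? "tpm-state.volatile"
                                        : "tpm-state";
    FILE *f = fopen(path, "wb");
    if (!f) {
        return -1;
    }
    size_t written = fwrite(blob, 1, len, f);
    fclose(f);
    return written == len ? 0 : -1;
}
```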

So I understand problem 3; but I don't think the solution works.
You don't know what the lifetime and use of snapshots are going to be
when they're taken, or indeed when they start running.  You can have
snapshots taken off snapshots; you can migrate, etc.

There are two ways I can see of solving this, but in both cases the state
has to live with the snapshot.  That means reverting to an earlier snapshot
reloads the vTPM from the vTPM state in that snapshot. One way is to use
the ioctl to grab all the state of the vTPM and save it in the snapshot as
migration data, and then when the snapshot resumes you take all the data
and stuff it back into the vTPM with another ioctl.  That's the full TPM
state (except maybe the RNG).  Unless the state is huge this should be
pretty easy.
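
[Editor's sketch of that save/restore round trip. Nothing here is a real
QEMU or vTPM API: a static buffer stands in for the CUSE device, and
vtpm_save()/vtpm_load() stand in for the get/set state-blob ioctls:]

```c
/*
 * Hypothetical sketch of "vTPM state blob travels with the snapshot
 * as migration data". A static buffer fakes the external vTPM so the
 * round trip can be shown self-contained; in reality the blob would
 * be moved over the CUSE device with ioctls.
 */
#include <string.h>

enum { VTPM_STATE_MAX = 4096 };

/* Fake device state; really kept inside the external vTPM process. */
static unsigned char vtpm_state[VTPM_STATE_MAX];
static size_t vtpm_state_len;

/* Snapshot save: pull the full state out of the vTPM and store it
 * with the snapshot's migration data. */
static size_t vtpm_save(unsigned char *out, size_t cap)
{
    size_t n = vtpm_state_len < cap ? vtpm_state_len : cap;
    memcpy(out, vtpm_state, n);   /* stand-in for the GET ioctl */
    return n;
}

/* Snapshot resume: stuff the saved blob back into the vTPM, so the
 * device state always matches the snapshot being resumed. */
static void vtpm_load(const unsigned char *in, size_t len)
{
    memcpy(vtpm_state, in, len);  /* stand-in for the SET ioctl */
    vtpm_state_len = len;
}
```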

I think there's also a separate solution used for Flash memory contents
(mostly on EFI VMs) called pflash; I don't understand the snapshotting on
that, but it might be worth checking into - I seem to remember it's based
around a small file, so it might be closer to your case.

Dave

>     Stefan
> 
> 
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK


