qemu-devel

Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM


From: Marc-André Lureau
Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE TPM
Date: Wed, 01 Mar 2017 14:17:36 +0000

Hi

On Wed, Mar 1, 2017 at 5:26 PM Stefan Berger <address@hidden> wrote:

> "Daniel P. Berrange" <address@hidden> wrote on 03/01/2017 07:54:14
> AM:
>
> > From: "Daniel P. Berrange" <address@hidden>
> > To: Stefan Berger <address@hidden>
> > Cc: "Dr. David Alan Gilbert" <address@hidden>, Stefan Berger/
> > Watson/address@hidden, "address@hidden" <address@hidden>, "qemu-
> > address@hidden" <address@hidden>, "SERBAN, CRISTINA"
> > <address@hidden>, "Xu, Quan" <address@hidden>,
> > "address@hidden" <address@hidden>,
> > "address@hidden" <address@hidden>, "SHIH, CHING C"
> > <address@hidden>
> > Date: 03/01/2017 08:03 AM
> > Subject: Re: [Qemu-devel] [PATCH v5 1/4] Provide support for the CUSE
> TPM
> >
> > On Wed, Mar 01, 2017 at 07:25:28AM -0500, Stefan Berger wrote:
> > > On 06/16/2016 04:25 AM, Daniel P. Berrange wrote:
> > > > On Thu, Jun 16, 2016 at 09:05:20AM +0100, Dr. David Alan Gilbert
> wrote:
> > > > > * Stefan Berger (address@hidden) wrote:
> > > > > > On 06/15/2016 03:30 PM, Dr. David Alan Gilbert wrote:
> > > > > <snip>
> > > > >
> > > > > > > So what was the multi-instance vTPM proxy driver patch set
> about?
> > > > > > That's for containers.
> > > > > Why have the two mechanisms? Can you explain how the
> multi-instance
> > > > > proxy works; my brief reading when I saw your patch series seemed
> > > > > to suggest it could be used instead of CUSE for the non-container
> case.
> > > > One of the key things that was/is not appealing about this CUSE
> approach
> > > > is that it basically invents a new ioctl() mechanism for talking to
> > > > a TPM chardev. With in-kernel vTPM support, QEMU probably doesn't
> need
> > > > to have any changes at all - its existing driver for talking to TPM
> > >
> > > We still need the control channel with the vTPM to reset it upon VM
> reset,
> > > for getting and setting the state of the vTPM upon
> snapshot/suspend/resume,
> > > changing locality, etc.
> >
> > You ultimately need the same mechanisms if using in-kernel vTPM with
> > containers as containers can support snapshot/suspend/resume/etc too.
>
> The vTPM running on the backend side of the vTPM proxy driver is
> essentially the same as the CUSE TPM used for QEMU. It has the same control
> channel through sockets. So at that level we would have support for the
> operations, but not integrated with anything that would support container
> migration.
>
>
Ah, that might explain why you added the socket control channel, but there
is no user yet? (or some private product, perhaps). Could you tell whether
the control and data channels need to be synchronized in any way?
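For context, a socket control channel like the one discussed here typically carries fixed-size framed commands (init, reset, get/set state blob) separate from the TPM command data path. Below is a minimal, purely illustrative sketch of such framing; the command names and numeric codes are hypothetical and are not the actual CUSE TPM or vTPM proxy protocol values.

```python
import struct

# Hypothetical command codes for illustration only -- these are NOT the
# real CUSE TPM / swtpm control-channel constants.
CMD_INIT = 1
CMD_RESET = 2
CMD_GET_STATEBLOB = 3
CMD_SET_STATEBLOB = 4

def pack_ctrl_msg(cmd: int, payload: bytes = b"") -> bytes:
    """Frame a control message: 4-byte big-endian command code + payload."""
    return struct.pack(">I", cmd) + payload

def unpack_ctrl_resp(data: bytes):
    """Parse a response: 4-byte big-endian result code, then any body."""
    (result,) = struct.unpack(">I", data[:4])
    return result, data[4:]
```

With framing like this, the control channel (reset, state save/restore) and the data channel (TPM commands/responses) can in principle run over separate sockets, which is exactly where the synchronization question above comes in: a reset or state-blob transfer must not race with an in-flight TPM command.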

Getting back to the original out-of-process design: QEMU links with many
libraries already, so perhaps a less controversial approach would be a
linked-in solution before proposing an out-of-process one? That would be
easier for management layers etc. to deal with. It wouldn't be the most
robust solution, but it could get us somewhere, at least for easier testing
and development.

thanks


-- 
Marc-André Lureau

