Re: [PATCH 00/15] s390x: Protected Virtualization support

From: Daniel P. Berrangé
Subject: Re: [PATCH 00/15] s390x: Protected Virtualization support
Date: Fri, 29 Nov 2019 12:35:24 +0000
User-agent: Mutt/1.12.1 (2019-06-15)

On Fri, Nov 29, 2019 at 01:14:27PM +0100, Janosch Frank wrote:
> On 11/29/19 12:08 PM, Daniel P. Berrangé wrote:
> > On Wed, Nov 20, 2019 at 06:43:19AM -0500, Janosch Frank wrote:
> >> Most of the QEMU changes for PV are related to the new IPL type with
> >> subcodes 8 - 10 and the execution of the necessary Ultravisor calls to
> >> IPL secure guests. Note that we can only boot into secure mode from
> >> normal mode, i.e. stfle 161 is not active in secure mode.
> >>
> >> The other changes are related to data gathering for emulation and
> >> disabling addressing checks in secure mode, as well as CPU resets.
> >>
> >> While working on this I sprinkled in some cleanups, since some
> >> functions grew significantly in line count and became unreadable.
> > 
> > Can you give some guidance on how management applications including
> > libvirt & layers above (oVirt, OpenStack, etc) would/should use this
> > feature ?  What new command line / monitor calls are needed, and
> > what feature restrictions are there on its use ?
> Management applications generally do not need to know about this
> feature. Most of the magic is in the guest image, which boots up in a
> certain way to become a protected machine.
> The requirements for that to happen are:
> * Machine/firmware support
> * KVM & QEMU support
> * IO only with iommu
> * Guest needs to use IO bounce buffers
> * A kernel image or a kernel on a disk that was prepared with special
> tooling

If the user has a guest image that's expecting to run in protected
machine mode, presumably this will fail to boot if run on a host
which doesn't support this feature ?

As a mgmt app, I think there will be a need to determine whether
a host + QEMU combo is actually able to support protected
machines. If the mgmt app is given an image and the user says it
requires protected mode, then the mgmt app needs to know which
host(s) are able to run it.

Doing version number checks is not particularly desirable, so is
there a way libvirt can determine if QEMU + host in general supports
protected machines, so that we can report this feature to mgmt apps ?
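As a rough sketch of the kind of probing a mgmt layer might do, assuming
the host kernel ends up exposing an sysfs attribute along the lines of
/sys/firmware/uv/prot_virt_host (the path and semantics here are an
assumption for illustration, not a confirmed interface; the real check
would be whatever the s390 kernel and QEMU support ultimately advertise):

```python
# Hypothetical sketch: probe whether an s390x host can run protected
# (PV) guests. The sysfs attribute name below is an assumption about
# what the kernel might expose, not a confirmed interface.
from pathlib import Path

DEFAULT_ATTR = "/sys/firmware/uv/prot_virt_host"

def host_supports_pv(attr_path=DEFAULT_ATTR):
    """Return True if the host advertises protected-VM support.

    Reads a single sysfs attribute expected to contain "1" when the
    ultravisor facility is usable by host KVM; an absent attribute is
    treated as lack of firmware/kernel support.
    """
    p = Path(attr_path)
    if not p.exists():
        return False
    return p.read_text().strip() == "1"

if __name__ == "__main__":
    print("PV capable:", host_supports_pv())
```

A version-number check could then be avoided: libvirt would probe once at
daemon start and expose the result as a host capability flag for mgmt apps.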

If a guest has booted & activated protected mode is there any way
for libvirt to query that status ? This would allow the mgmt app
to know that the guest is not going to be migratable thereafter.

Is there any way to prevent a guest from using protected mode even
if QEMU supports it ? E.g. the mgmt app may want to be able to
guarantee that all VMs are migratable, so it doesn't want a guest OS
secretly activating protected mode and thereby blocking migration.

> Such VMs are started like any other VM and run a short "normal" stub
> that will prepare some things and then requests to be protected.
> Most of the restrictions are memory related and might be lifted in the
> future:
> * No paging
> * No migration

Presumably QEMU is going to set a migration blocker when a guest
activates protected mode ?

> * No huge page backings
> * No collaborative memory management

> There are no monitor changes or cmd additions currently.
> We're trying to insert protected VMs into the normal VM flow as much as
> possible. You can even do a memory dump without any segfault or
> protection exception for QEMU, however the guest's memory content will
> be unreadable because it's encrypted.

Is there any way to securely acquire a key needed to interpret this,
or is the memory dump completely useless ?

|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
