What the user, and thus also user space, wants depends on other factors:
1. reliability
2. performance
3. availability
It's not features; features are what programmers want.
That's why I have designed the model and migration capability around the
hardware and not around the software features, and why I currently don't
allow them to be enabled together.
A software feature is a nice add-on that is helpful for evaluation or
development purposes. There is little room for it on production systems.
One option that I currently see to make a software-implemented facility
migration-capable is to calculate some kind of hash value derived from the
full set of active software facilities. That value can be compared with
pre-calculated values also stored in QEMU's supported-model table. This value
could be seen as a virtual model extension that has to match, just like the
model name.
But as I have said elsewhere already, a soft facility should be the exception
and not the rule.
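One way such a hash could be computed, as a sketch (the facility numbers and the choice of SHA-256 are illustrative assumptions, not anything defined by QEMU):

```python
import hashlib

def soft_facility_hash(active_soft_facilities):
    """Order-independent digest over the set of active software facilities.

    The facility numbers are sorted first, so the same set always yields
    the same hash regardless of discovery order.
    """
    canonical = ",".join(str(f) for f in sorted(active_soft_facilities))
    return hashlib.sha256(canonical.encode("ascii")).hexdigest()

# A model would only be migration-compatible if, besides the model name,
# this "virtual model extension" matches the pre-calculated value stored
# in the supported-model table. Facility numbers below are made up.
precalculated = soft_facility_hash({7, 21, 42})
assert soft_facility_hash({42, 7, 21}) == precalculated  # order-independent
assert soft_facility_hash({7, 21}) != precalculated      # different set, no match
```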
So all we need is a list of "features the guest sees available" which is
the same as "features user space wants the guest to see" which then gets
masked through "features the host can do in hardware".
For emulation we can just check the global feature availability to decide
whether we should emulate them or not.
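The masking described above can be sketched with plain bitmaps (a toy model, not QEMU's actual data structures; the bit values are invented):

```python
# Facility bitmaps as integers: bit n set = facility n available.
HOST_HW    = 0b1011_0111  # features the host can do in hardware
USER_WANTS = 0b1111_0001  # features user space wants the guest to see

# What the guest actually sees: the request masked by host capability.
guest_sees = USER_WANTS & HOST_HW  # 0b1011_0001

def should_emulate(facility_bit, guest_visible=guest_sees):
    """For emulation, check global availability of the facility first."""
    return bool(guest_visible & (1 << facility_bit))
```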
Also, if user space wants to make sure that its feature list is actually
workable on the host kernel, it needs to set the features, get them back,
and compare the result with what it set? That's different from x86's
cpuid implementation but probably workable.
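That set-then-get handshake could look like this from user space's side (a toy kernel stand-in that silently masks; all names and bit values here are hypothetical):

```python
HOST_MASK = 0b0110_1111  # what the (toy) kernel can actually provide

class ToyKvm:
    """Stand-in for the kernel side: set silently masks, get reads back."""
    def __init__(self):
        self._features = 0

    def set_features(self, wanted):
        self._features = wanted & HOST_MASK

    def get_features(self):
        return self._features

def probe(kvm, wanted):
    """Return (workable, effective): workable only if the kernel kept
    every requested feature after the set/get round trip."""
    kvm.set_features(wanted)
    effective = kvm.get_features()
    return effective == wanted, effective
```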
User space will probe which facilities are available and match them with the
predefined CPU model set. Only those models that use a partial or full subset
of the hard/host facility list are selectable.
Why?
If a host does not offer the features required for a model, that model cannot
run efficiently.
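The selectability rule, i.e. a model is offered only if its facility set is a partial or full subset of the host's, sketched with invented model names and facility numbers:

```python
# Illustrative model table: model name -> required facility set.
# Names and facility numbers are made up, not real s390 data.
MODEL_TABLE = {
    "gen1":     {1, 2, 3},
    "gen1-ga2": {1, 2, 3, 4},
    "gen2":     {1, 2, 3, 4, 9},
}

def selectable_models(host_facilities):
    """A model is selectable only if every facility it requires is offered
    by the host in hardware; otherwise it could not run efficiently."""
    return sorted(name for name, needed in MODEL_TABLE.items()
                  if needed <= host_facilities)

# A host offering {1, 2, 3, 4} can select gen1 and gen1-ga2, but not gen2.
```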
Please take a look at how x86 does cpuid masking :).
In fact, I'm not 100% convinced that it's a good idea to link cpuid /
feature list exposure to the guest and actual feature implementation
inside the guest together. On POWER there is a patch set pending that
implements these two things separately - admittedly mostly because
hardware sucks and we can't change the PVR.
That is maybe the big difference from s390. The cpuid in the s390 case is not
directly comparable to the processor version register of POWER.
In the s390 world we have a well-defined CPU model space spanned by the machine
type and its GA count. Thus we can define a bijective mapping between
(type, ga) <-> (cpuid, ibc, facility set). From type and GA we form the model
name, which BTW is also meaningful to a human user.
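That bijective mapping can be sketched like this (every concrete value below, including the types, GA counts, cpuids, IBC values and the name scheme, is invented for illustration, not real s390 data):

```python
# (type, ga) -> (cpuid, ibc, facility set); all values are made up.
MODEL_MAP = {
    (9998, 1): (0x100, 0x10, frozenset({1, 2})),
    (9998, 2): (0x101, 0x11, frozenset({1, 2, 3})),
}

# Because the mapping is bijective, the reverse direction is well defined
# and can be built once without collisions.
REVERSE = {v: k for k, v in MODEL_MAP.items()}
assert len(REVERSE) == len(MODEL_MAP)

def model_name(mtype, ga):
    """From type and GA we can form a human-readable model name."""
    return f"type{mtype}-ga{ga}"
```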
Same thing as POWER.
By means of this name, a management interface (libvirt) can decide whether
migration to a remote hypervisor is a good idea or not. For that it just needs
to check whether the current model of the guest on the source hypervisor
("query-cpu-model") is contained in the supported model list of the target
hypervisor ("query-cpu-definitions").
I don't think this works, since QEMU should always return all the cpu
definitions it's aware of on query-cpu-definitions, not just the ones
that it thinks may be compatible with the host at a random point in time.
It does not return model names that it thinks are compatible at some random
point in time. In s390 mode, it returns all definitions (CPU models) that a
given host system is capable of running. Together with the CPU model run by
the guest, some upper management interface knows whether the hypervisor
supports the required CPU model and uses a guest definition with the same
CPU model on the target hypervisor.
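The migration decision on the management side then reduces to a containment check, sketched here with hypothetical query results (the model names are invented):

```python
def migration_is_safe(source_guest_model, target_definitions):
    """Migrate only if the guest's current model (as reported by
    "query-cpu-model" on the source) appears in the target's supported
    list (as reported by "query-cpu-definitions")."""
    return source_guest_model in target_definitions

# Hypothetical outputs of the two queries:
source_model = "gen1-ga2"
target_list = ["gen1", "gen1-ga2"]  # all models the target host can run

assert migration_is_safe(source_model, target_list)
assert not migration_is_safe("gen2", target_list)
```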
The information for that is taken from the model table which QEMU builds up
at startup time. This list limits the command-line-selectable CPU models as
well.