Re: [Qemu-devel] [PATCH] target-i386: Disable CPUID_EXT_MONITOR when KVM is enabled


From: Bandan Das
Subject: Re: [Qemu-devel] [PATCH] target-i386: Disable CPUID_EXT_MONITOR when KVM is enabled
Date: Tue, 28 May 2013 12:34:53 -0400
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.2 (gnu/linux)

Eduardo Habkost <address@hidden> writes:

> On Mon, May 27, 2013 at 02:21:36PM +0200, Paolo Bonzini wrote:
>> Il 27/05/2013 14:09, Eduardo Habkost ha scritto:
>> > On Sat, May 25, 2013 at 08:25:49AM +0200, Paolo Bonzini wrote:
>> >> Il 25/05/2013 03:21, Bandan Das ha scritto:
>> >>> There is one user-visible effect: "-cpu ...,enforce" will stop failing
>> >>> because of missing KVM support for CPUID_EXT_MONITOR. But that's exactly
>> >>> the point: there's no point in having CPU model definitions that would
>> >>> never work as-is with either TCG or KVM. This patch is changing the
>> >>> meaning of (e.g.) "-machine ...,accel=kvm -cpu Opteron_G3" to match what
>> >>> was already happening in practice.
>> >>
>> >> But then -cpu Opteron_G3 does not match a "real" Opteron G3.  Is it
>> >> worth it?
>> > 
>> > No models match a "real" CPU this way, because neither TCG nor KVM
>> > supports all the features of a real CPU. I ask the opposite
>> > question: is it worth maintaining an "accurate" CPU model definition
>> > that would never work without feature-bit tweaking in the command-line?
>> 
>> It would work with TCG.  Changing from TCG to KVM should not change the
>> guest-visible hardware if you use "-cpu ...,enforce", so it is right that
>> it fails when starting with KVM.
>> 
>
> Changing between KVM and TCG _does_ change hardware today (with or
> without check/enforce). Under TCG, every CPU model automatically has
> the features TCG doesn't support removed. See the "if (!kvm_enabled())"
> block in x86_cpu_realizefn().

Yes, this is exactly why I was inclined to remove the monitor flag.
We already have uses of kvm_enabled() to set (or remove) KVM-specific
bits, and this change is no different. I can see Paolo's point, though:
having a common definition probably makes sense too.
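
To make that concrete, here is a stripped-down, standalone sketch of the
pattern (not the actual cpu.c code; the bit values and mask below are
invented for illustration, and the real masks are the TCG_*_FEATURES
constants and feature words in target-i386/cpu.c):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Invented stand-ins for this sketch; the real masks are the
 * TCG_*_FEATURES constants in target-i386/cpu.c. */
#define CPUID_EXT_SSE3     (1u << 0)
#define CPUID_EXT_MONITOR  (1u << 3)
#define TCG_EXT_FEATURES   (CPUID_EXT_SSE3 | CPUID_EXT_MONITOR)

static bool use_kvm;   /* stand-in for kvm_enabled() */

/* Same shape as the !kvm_enabled() block in x86_cpu_realizefn():
 * under TCG, feature bits the emulator cannot provide are silently
 * dropped, so the guest-visible CPU already differs between
 * accelerators today. */
static uint32_t realize_cpuid_ecx(uint32_t ecx)
{
    if (!use_kvm) {
        ecx &= TCG_EXT_FEATURES;
    }
    return ecx;
}

int main(void)
{
    /* A model asking for sse3, monitor, and one bit TCG can't do. */
    uint32_t model = CPUID_EXT_SSE3 | CPUID_EXT_MONITOR | (1u << 9);

    use_kvm = false;
    printf("TCG-visible ECX: %#x\n", realize_cpuid_ecx(model));
    use_kvm = true;
    printf("KVM-visible ECX: %#x\n", realize_cpuid_ecx(model));
    return 0;
}

(Under TCG the unsupported bit is quietly masked out; under KVM the
feature word is left alone at this point, and it is the later
check/enforce comparison that trips over CPUID_EXT_MONITOR.)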


> (That's why I argue that we need separate classes/names for TCG and KVM
> modes. Otherwise our predefined models become less useful, since they
> will require low-level feature-bit fiddling on the libvirt side to make
> them work as expected.)

Agreed. From a user's perspective, the more a CPU model "just works",
whether it's running under KVM or TCG, the better.
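
To spell out the user-visible part with a toy example (standalone, and
the names and bit values are my invented stand-ins; the real check
lives in the check/enforce path in target-i386/cpu.c):

#include <stdint.h>
#include <stdio.h>

#define CPUID_EXT_SSE3     (1u << 0)
#define CPUID_EXT_MONITOR  (1u << 3)

/* Hypothetical stand-in for what "-cpu ...,enforce" does: compare the
 * bits the model asks for with the bits the accelerator can supply,
 * and refuse to start on any mismatch. */
static int enforce_check(uint32_t requested, uint32_t supported)
{
    uint32_t missing = requested & ~supported;
    if (missing) {
        fprintf(stderr, "missing features: %#x\n", missing);
        return -1;
    }
    return 0;
}

int main(void)
{
    uint32_t model = CPUID_EXT_SSE3 | CPUID_EXT_MONITOR;
    uint32_t kvm_supported = CPUID_EXT_SSE3;  /* KVM never reports MONITOR */

    /* Today: an Opteron_G3-style model with enforce fails under KVM. */
    printf("before patch: %d\n", enforce_check(model, kvm_supported));

    /* With the patch the model stops asking for MONITOR under KVM,
     * so the same command line starts cleanly. */
    printf("after patch:  %d\n",
           enforce_check(model & ~CPUID_EXT_MONITOR, kvm_supported));
    return 0;
}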

Bandan


