Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model

From: Jiri Denemark
Subject: Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model
Date: Thu, 26 May 2016 12:41:54 +0200
User-agent: Mutt/1.5.24 (2015-08-30)

On Wed, May 25, 2016 at 23:13:24 -0600, Chris Friesen wrote:
> Hi,
> If I don't specify a virtual CPU model, it appears to give me a "qemu64" CPU, 
> and /proc/cpuinfo in the guest instance looks something like this:
> processor  : 0
> vendor_id  : GenuineIntel
> cpu family : 6
> model      : 6
> model name : QEMU Virtual CPU version 2.2.0
> stepping   : 3
> microcode  : 0x1
> flags      : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36
> clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic
> popcnt hypervisor lahf_lm abm vnmi ept
> However, if I explicitly specify a custom CPU model of "qemu64" the instance 
> refuses to boot and I get a log saying:
> libvirtError: unsupported configuration: guest and host CPU are not
> compatible: Host CPU does not provide required features: svm

The qemu64 CPU model contains svm and thus libvirt will always consider
it incompatible with any Intel CPU (which has vmx instead of svm). On
the other hand, QEMU by default ignores features that are missing in the
host CPU and has no problem using the qemu64 CPU; the guest just won't
see some of the features defined in the qemu64 model.
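The check libvirt applies here can be illustrated with a small sketch (the
feature sets below are abbreviated, hypothetical samples, not the full model
definitions): a guest model is only compatible when every feature it
requires is present on the host.

```python
# Hypothetical, abbreviated feature sets -- not the full qemu64 definition.
qemu64_required = {"sse2", "cx16", "svm"}   # qemu64 includes svm (AMD virt extension)
intel_host = {"sse2", "cx16", "vmx"}        # Intel hosts expose vmx, never svm

# libvirt-style check: every feature the guest model requires
# must be provided by the host CPU.
missing = qemu64_required - intel_host
if missing:
    # prints "Host CPU does not provide required features: svm"
    print("Host CPU does not provide required features: "
          + ", ".join(sorted(missing)))
```

Since svm can never be in an Intel host's feature set, the check fails every
time, regardless of how capable the host otherwise is.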

In your case, you should be able to use

    <cpu mode='custom' match='exact'>
        <model>qemu64</model>
        <feature name='svm' policy='disable'/>
    </cpu>

to get the same CPU model you'd get by default (if not, you may need to
also add <feature name='vmx' policy='require'/>).


    <cpu mode='custom' match='exact'>
        <model>qemu64</model>
        <feature name='svm' policy='force'/>
    </cpu>

should work too (and it would be better in case you use it on an AMD
host).

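The difference between the two policies can be sketched roughly like this
(a simplified, hypothetical model of the feature handling, not libvirt's
actual code): 'disable' removes the feature from the guest CPU entirely,
while 'force' enables it in the guest definition without requiring it on
the host, so in either case the host check no longer needs to find it.

```python
# Simplified illustration of feature policies -- not libvirt's actual logic.
def host_check_features(model_features, policies):
    """Return the set of features libvirt must find on the host CPU."""
    required = set(model_features)
    for name, policy in policies.items():
        if policy in ("disable", "force"):
            # 'disable' drops the feature from the guest CPU;
            # 'force' enables it in the guest regardless of the host.
            # Either way, the host no longer has to provide it.
            required.discard(name)
        elif policy == "require":
            required.add(name)
    return required

qemu64 = {"sse2", "cx16", "svm"}
print(host_check_features(qemu64, {"svm": "disable"}))  # svm no longer checked
```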
But why do you even want to use the qemu64 CPU in a domain XML explicitly?
If you're fine with that CPU, just let QEMU use the default one. If not,
use a CPU model that fits your host/needs better.

BTW, using qemu64 with TCG (i.e., domain type='qemu' as opposed to
type='kvm') is fine because libvirt won't check it against the host CPU
and QEMU will emulate all features, so you'd even get features that the
host CPU does not support.
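For completeness, the TCG case only differs in the type attribute on the
domain element; a minimal fragment (element names as in the libvirt domain
XML format, everything else omitted):

    <domain type='qemu'>
        <cpu mode='custom' match='exact'>
            <model>qemu64</model>
        </cpu>
    </domain>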


P.S. Kashyap is right, the issue he mentioned is not related at all to
your case.
