
Re: [Qemu-devel] [PATCH v2 2/2] s390x/ais: disable ais for compat machines


From: Christian Borntraeger
Subject: Re: [Qemu-devel] [PATCH v2 2/2] s390x/ais: disable ais for compat machines
Date: Wed, 27 Sep 2017 09:12:03 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.0


On 09/26/2017 03:51 PM, David Hildenbrand wrote:
> On 26.09.2017 15:36, Christian Borntraeger wrote:
>> With newer kernels that do support the ais feature (4.13), a QEMU 2.11
>> will not only enable the ais feature for the 2.11 machine, but also
>> for a <=2.10 compat machine. As this feature is not available in
>> QEMU <=2.9 (and QEMU 2.10.1), such a guest will fail to migrate
>> back to an older QEMU like 2.9 with:
>>
>> _snip_
>> error while loading state for instance 0x0 of device 's390-flic'
>> _snip_
>>
>> making the whole compat machine dysfunctional. As a permanent fix, we
>> need to fence the ais feature for machines <= 2.10.
>>
>> Due to ais being enabled in 2.10.0 (fixed in 2.10.1), this will prevent
>> migration of ais-enabled guests from 2.10.0 with:
>>
>> _snip_
>> qemu-system-s390x: Failed to load s390-flic/ais:tmp
>> qemu-system-s390x: error while loading state for instance 0x0 of device 's390-flic'
>> qemu-system-s390x: load of migration failed: Function not implemented
>> _snip_
>>
>> Signed-off-by: Christian Borntraeger <address@hidden>
>> Cc: Yi Min Zhao <address@hidden>
>> Cc: Dr. David Alan Gilbert <address@hidden>
>> ---
>>  hw/intc/s390_flic_kvm.c            |  4 +++-
> 
> 
> As discussed, I think we should use cpu_model_allowed() instead.
I think I still prefer the explicit check for ais-enabled to make managedsave
(on the same system) continue to work if the user does not specify a cpu model
at all (which will then fall back to the host model). We already fence other
things (like guarded storage), and yes, it will grow over time, but the
*_allowed things seem to be the smallest maintenance issue in this area.
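
For reference, the fencing pattern under discussion follows QEMU's usual
compat-machine scheme: each versioned machine definition first applies the
options of the next newer version and then switches off whatever that older
machine must not expose. The stand-alone sketch below only models that idea
with hypothetical names (MachineOpts, ais_allowed, machine_2_x_options); it
is not the actual patch and not the real s390-ccw-virtio machine code.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-machine option block; the real code keeps such flags in
 * the machine class/state instead of a plain struct. */
typedef struct MachineOpts {
    bool ais_allowed;   /* is adapter-interruption suppression exposed? */
} MachineOpts;

static void machine_2_11_options(MachineOpts *opts)
{
    opts->ais_allowed = true;      /* newest machine: feature available */
}

static void machine_2_10_options(MachineOpts *opts)
{
    machine_2_11_options(opts);    /* inherit the newer defaults ...      */
    opts->ais_allowed = false;     /* ... then fence what <= 2.10 lacked  */
}

static void machine_2_9_options(MachineOpts *opts)
{
    machine_2_10_options(opts);    /* older versions inherit the fence */
}

int main(void)
{
    MachineOpts m29 = { 0 }, m211 = { 0 };

    machine_2_9_options(&m29);
    machine_2_11_options(&m211);

    printf("2.9 compat machine: ais %s\n", m29.ais_allowed ? "on" : "off");
    printf("2.11 machine:       ais %s\n", m211.ais_allowed ? "on" : "off");
    return 0;
}

With a flag like that in place, the flic save/load path can skip or refuse the
ais state for fenced machines, which is what makes migration back to a QEMU
without ais support work again.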



