qemu-s390x

Re: [PATCH v1] s390x: Reject unaligned RAM sizes


From: David Hildenbrand
Subject: Re: [PATCH v1] s390x: Reject unaligned RAM sizes
Date: Fri, 27 Mar 2020 17:01:18 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.6.0

On 27.03.20 16:34, Christian Borntraeger wrote:
> 
> 
> On 27.03.20 16:29, David Hildenbrand wrote:
>> Historically, we fixed up the RAM size (rounded it down), to fit into
>> storage increments. Since commit 3a12fc61af5c ("390x/s390-virtio-ccw: use
>> memdev for RAM"), we no longer consider the fixed-up size when
>> allocating the RAM block - which will break migration.
>>
>> Let's simply drop that manual fixup code and let the user supply sane
>> RAM sizes. This will bail out early when trying to migrate (and make
>> an existing guest with e.g., 12345 MB non-migratable), but maybe we
>> should have rejected such RAM sizes right from the beginning.
>>
>> As we no longer fixup maxram_size as well, make other users use ram_size
>> instead. Keep using maxram_size when setting the maximum ram size in KVM,
>> as that will come in handy in the future when supporting memory hotplug
>> (in contrast, storage keys and storage attributes for hotplugged memory
>>  will have to be migrated per RAM block in the future).
>>
>> This fixes (or rather rejects early):
>>
>> 1. Migrating older QEMU to upstream QEMU (e.g., with "-m 1235M"), as the
>>    RAM block size changed.
> 
> Not sure I like this variant. Instead of breaking migration (that was
> accidentally done by Igor's changes), we now reject migration from older
> QEMUs to 5.0. This is not going to help those that still have such guests
> running and want to migrate.

As Igor mentioned on another channel, you can most probably migrate an
older guest by starting it on the target with the fixed-up size.

E.g., migrate an old QEMU "-m 1235M" to a new QEMU "-m 1234M"
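Roughly, the old fixup rounded the size down to the storage increment,
which is how 1235M becomes 1234M. A minimal sketch of that arithmetic
(the function name and the increment cap are illustrative assumptions,
reconstructed from the old sclp fixup behavior, not copied from QEMU):

#include <stdint.h>
#include <stdio.h>

#define MiB (1ULL << 20)
/* Assumption for illustration; the real cap lives in the sclp code. */
#define MAX_STORAGE_INCREMENTS 1024

/* Sketch of the historical fixup: pick the smallest power-of-two
 * increment (>= 1 MiB) such that the RAM fits into the allowed number
 * of storage increments, then round the size down to that increment. */
static uint64_t s390_fixup_ram_size(uint64_t ram_size)
{
    int shift = 20; /* start at a 1 MiB increment */

    while ((ram_size >> shift) > MAX_STORAGE_INCREMENTS) {
        shift++;
    }
    return (ram_size >> shift) << shift;
}

int main(void)
{
    /* 1235 MiB needs 2 MiB increments, so it gets rounded down to
     * 1234 MiB - the size to pass to the target QEMU. */
    printf("%llu MiB\n",
           (unsigned long long)(s390_fixup_ram_size(1235 * MiB) / MiB));
    return 0;
}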

Not sure how many such weird-size VMs we actually do have in practice.
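For completeness, the early bail-out described in the quoted commit
message ("This fixes (or rather rejects early)") amounts to something
like the sketch below; the function name and error wording are
assumptions for illustration, not the literal patch:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MiB (1ULL << 20)

/* Sketch: during machine init, refuse unaligned RAM sizes instead of
 * silently rounding them down. "increment" would be the storage
 * increment size computed as in the fixup sketch above. */
static void s390_check_ram_size(uint64_t ram_size, uint64_t increment)
{
    if (ram_size % increment) {
        fprintf(stderr, "ram size must be a multiple of %llu MiB\n",
                (unsigned long long)(increment / MiB));
        exit(1);
    }
}

int main(void)
{
    /* 12345 MiB (the commit message example) would use a 16 MiB
     * increment; 12345 % 16 != 0, so this is rejected. */
    s390_check_ram_size(12345 * MiB, 16 * MiB);
    return 0;
}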


-- 
Thanks,

David / dhildenb
