From: David Hildenbrand
Subject: Re: [Qemu-devel] [qemu-s390x] [PATCH 1/1] s390x/sclp: fix maxram calculation
Date: Mon, 30 Jul 2018 17:28:19 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.8.0

On 30.07.2018 17:20, Christian Borntraeger wrote:
> 
> 
> On 07/30/2018 05:17 PM, David Hildenbrand wrote:
>> On 30.07.2018 17:00, Christian Borntraeger wrote:
>>>
>>>
>>> On 07/30/2018 04:34 PM, David Hildenbrand wrote:
>>>> On 30.07.2018 16:09, Christian Borntraeger wrote:
>>>>> We clamp ram_size down to match the SCLP increment size, but we do
>>>>> not do the same for maxram_size. This means that for large guests
>>>>> with certain sizes (e.g. -m 50000) maxram_size differs from ram_size,
>>>>> which can break other code (e.g. CMMA migration) that uses maxram_size
>>>>> to calculate the number of pages and then fails with errors.
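
(Purely for illustration, a minimal sketch of the kind of alignment being
described; sclp_align_ram() is a made-up helper name, while MachineState's
ram_size/maxram_size fields and ram_addr_t are the real QEMU names:

    /* Align both sizes down to the SCLP storage increment so that
     * maxram_size can never diverge from ram_size, e.g. for -m 50000. */
    static void sclp_align_ram(MachineState *machine, int increment_size)
    {
        ram_addr_t aligned = machine->ram_size;

        aligned = aligned >> increment_size << increment_size;
        machine->ram_size = aligned;
        machine->maxram_size = aligned;   /* keep maxram in sync */
    }
)
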
>>>>
>>>> So the only problem is that the buffer sizes on the source and target
>>>> differ?
>>>
>>> The problem is that the target tries to access a non-existing buffer when
>>> committing all CMMA values, so the kernel returns EFAULT.
>>>>
>>
>> Am I wrong, or does the CMMA migration code really not care about which
>> parts of maxram are actually used (== which memory regions are actually
>> defined)?
>>
>> If so, this looks broken to me, and the right fix for now is to use
>> ram_size, because the code simply does not support maxram.
>>
>> (I assume using some -m X,maxmem=X+Y would make it fail in the same way)
>>
>> (this patch still makes sense and should be done)
> 
> I am looking for the minimal fix for 2.13 and ideally even for 2.12.1.
> 
> Can we agree on this fix and do the remaining thing later?
> 

Yes. The clean fix should then really only consider mapped memory
regions (so the sending side should somehow iterate over them and also
only access that memory).
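
As a purely illustrative sketch of that direction (cmma_sync_range() is a
made-up placeholder for the actual KVM sync, and I am assuming
qemu_ram_foreach_block() with its (name, host, offset, length, opaque)
callback signature; if that has changed, treat this as pseudocode):

    /* Sketch only: visit each defined RAM block and sync CMMA values for
     * that range alone, instead of assuming maxram_size worth of pages. */
    static int cmma_sync_block(const char *block_name, void *host_addr,
                               ram_addr_t offset, ram_addr_t length,
                               void *opaque)
    {
        return cmma_sync_range(opaque, offset, length);
    }

    static int cmma_sync_defined_ram(void *opaque)
    {
        return qemu_ram_foreach_block(cmma_sync_block, opaque);
    }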

Reviewed-by: David Hildenbrand <address@hidden>

-- 

Thanks,

David / dhildenb


