

From: David Hildenbrand
Subject: Re: [Qemu-devel] [PATCH] s390x: remove direct reference to mem_path global from s390x code
Date: Fri, 25 Jan 2019 10:27:23 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.3.1

On 25.01.19 10:23, Cornelia Huck wrote:
> On Thu, 24 Jan 2019 17:57:56 +0100
> Igor Mammedov <address@hidden> wrote:
> 
>> I plan to deprecate the -mem-path option and replace it with memory-backend;
>> for that it's necessary to get rid of the mem_path global variable.
>> Do it for the s390x case, replacing it with an alternative way to enable
>> the 1MB hugepage capability.
> 
> Getting rid of accessing mem_path directly sounds good.
> 
>>
>> Signed-off-by: Igor Mammedov <address@hidden>
>> ---
>> PS:
>> Neither the original code nor the new one is probably entirely correct
>> when huge pages are enabled in the case where mixed initial RAM and
>> memory backends are used: the backend's page size might not match
>> initial RAM's, so I'm not sure if enabling the 1MB cap is correct in
>> this case on s390 (should it be the same for all RAM???).
>> With the new approach the 1MB cap is not enabled unless the smallest
>> page size is 1MB.
>> ---
>>  target/s390x/kvm.c | 37 ++++++++++++++++---------------------
>>  1 file changed, 16 insertions(+), 21 deletions(-)
>>
>> diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
>> index 2ebf26a..22e868a 100644
>> --- a/target/s390x/kvm.c
>> +++ b/target/s390x/kvm.c
>> @@ -285,33 +285,28 @@ void kvm_s390_crypto_reset(void)
>>      }
>>  }
>>  
>> -static int kvm_s390_configure_mempath_backing(KVMState *s)
>> +static int kvm_s390_configure_hugepage_backing(KVMState *s)
>>  {
>> -    size_t path_psize = qemu_mempath_getpagesize(mem_path);
>> +    size_t psize = qemu_getrampagesize();
>>  
>> -    if (path_psize == 4 * KiB) {
>> -        return 0;
>> -    }
>> -
>> -    if (!hpage_1m_allowed()) {
>> -        error_report("This QEMU machine does not support huge page "
>> -                     "mappings");
>> -        return -EINVAL;
>> -    }
>> +    if (psize == 1 * MiB) {
>> +        if (!hpage_1m_allowed()) {
>> +            error_report("This QEMU machine does not support huge page "
>> +                         "mappings");
>> +            return -EINVAL;
>> +        }
>>  
>> -    if (path_psize != 1 * MiB) {
>> +        if (kvm_vm_enable_cap(s, KVM_CAP_S390_HPAGE_1M, 0)) {
>> +            error_report("Memory backing with 1M pages was specified, "
>> +                         "but KVM does not support this memory backing");
>> +            return -EINVAL;
>> +        }
>> +        cap_hpage_1m = 1;
>> +    } else if (psize == 2 * GiB) {
>>          error_report("Memory backing with 2G pages was specified, "
>>                       "but KVM does not support this memory backing");
>>          return -EINVAL;
>>      }
>> -
>> -    if (kvm_vm_enable_cap(s, KVM_CAP_S390_HPAGE_1M, 0)) {
>> -        error_report("Memory backing with 1M pages was specified, "
>> -                     "but KVM does not support this memory backing");
>> -        return -EINVAL;
>> -    }
>> -
>> -    cap_hpage_1m = 1;
>>      return 0;
> 
> Just to compare, the old code did:
> - 4K pages -> all fine, do nothing
> - 1MB pages not allowed -> get out, regardless of the actual huge page
>   size
> - 1MB pages -> try to enable, if possible
> - all other sizes -> moan about 2G pages and get out
> 
> And the new code does:
> - 1M pages -> get out if 1MB not allowed, otherwise try to enable
> - 2G pages -> moan about 2G pages and get out
> - all other sizes -> all fine, do nothing
> 
> So, now the user will:
> - get a different error if they try to run with a 2G backing but
>   hpage_1m_allowed is off (which does not sound like a problem to me)
> - get the all-clear if they specified a hypothetical different page
>   size, while the code always complained about 2G pages before
> 
> Are there any chances at all that there may be Yet Another Size? If not,
> this looks fine.

I think the next logical step is 1TB pages - unlikely for the next years ;)

-- 

Thanks,

David / dhildenb


