
From: Waiman Long
Subject: Re: [Qemu-devel] [PATCH 0/7] x86: Rework KVM-defaults compat code, enable kvm_pv_unhalt by default
Date: Wed, 11 Oct 2017 16:19:38 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.0

On 10/10/2017 03:41 PM, Eduardo Habkost wrote:
> On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
>> On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
>>>> Yes.  Another possibility is to enable it when there is >1 NUMA node in
>>>> the guest.  We generally don't do this kind of magic but higher layers
>>>> (oVirt/OpenStack) do.
>>> Can't the guest make this decision, instead of the host?
>> By guest, do you mean the guest OS itself or the admin of the guest VM?
> It could be either.  But even if action is required from the
> guest admin to get better performance in some cases, I'd argue
> that the default behavior of a Linux guest shouldn't cause a
> performance regression if the host stops hiding a feature in
> CPUID.
>
>> I am thinking about maybe adding a kernel boot command line option like
>> "unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use
>> unfair spinlock if the number of CPUs is 4 or less, for example. The
>> default value of 0 will have the same behavior as it is today. Please
>> let me know what you guys think about that.
> If that's implemented, can't Linux choose a reasonable default
> for unfair_pvspinlock_cpu_threshold that won't require the admin
> to manually configure it in most cases?

It is hard to pick a fixed value, as it depends on the CPUs being used
as well as the kind of workloads that are being run. Besides, using
unfair locks has the undesirable side effect of being subject to lock
starvation under certain circumstances. So we may not want it to be
turned on by default. Customers will have to take that risk themselves
if they want it.
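
For illustration only, here is a rough sketch of how such a boot
parameter could be wired up on the kernel side. The parameter name
comes from the proposal above; the parsing hook and the place where
the threshold is consulted are assumptions, not code from any posted
patch:

/* Hypothetical sketch -- not from a posted patch. */
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/cpumask.h>

/* 0 (the default) keeps today's behavior: never use unfair locks. */
static unsigned int unfair_pvspinlock_cpu_threshold __initdata;

static int __init parse_unfair_threshold(char *arg)
{
        return kstrtouint(arg, 10, &unfair_pvspinlock_cpu_threshold);
}
early_param("unfair_pvspinlock_cpu_threshold", parse_unfair_threshold);

/* Would be consulted during paravirt spinlock setup (placement assumed). */
static bool __init want_unfair_spinlocks(void)
{
        return unfair_pvspinlock_cpu_threshold &&
               num_possible_cpus() <= unfair_pvspinlock_cpu_threshold;
}

With something along these lines, a guest admin could boot with
unfair_pvspinlock_cpu_threshold=4 to opt in for small guests, while
the default of 0 leaves the current behavior unchanged.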

Regards,
Longman



