From: Jan Kiszka
Subject: Re: [Qemu-devel] [PATCH] kvm: First step to push iothread lock out of inner run loop
Date: Sun, 24 Jun 2012 16:08:49 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686 (x86_64); de; rv:1.8.1.12) Gecko/20080226 SUSE/2.0.0.12-1.1 Thunderbird/2.0.0.12 Mnenhy/0.7.5.666

On 2012-06-24 15:34, liu ping fan wrote:
> On Sat, Jun 23, 2012 at 7:45 PM, Jan Kiszka <address@hidden> wrote:
>> On 2012-06-23 11:06, Marcelo Tosatti wrote:
>>> On Fri, Jun 22, 2012 at 09:22:59PM -0300, Marcelo Tosatti wrote:
>>>> On Sat, Jun 23, 2012 at 12:55:49AM +0200, Jan Kiszka wrote:
>>>>> Should have declared this [RFC] in the subject and CC'ed kvm...
>>>>>
>>>>> On 2012-06-23 00:45, Jan Kiszka wrote:
>>>>>> This sketches a possible path to get rid of the iothread lock on vmexits
>>>>>> in KVM mode. On x86, the in-kernel irqchip has to be used because
>>>>>> we otherwise need to synchronize APIC and other per-cpu state accesses
>>>>>> that could be changed concurrently. Not yet fully analyzed is the NMI
>>>>>> injection path in the absence of an APIC.
>>>>>>
>>>>>> s390x should be fine without specific locking as its pre/post-run
>>>>>> callbacks are empty. Power requires locking for the pre-run callback.
>>>>>>
>>>>>> This patch is untested, but a similar version was successfully used in
>>>>>> an x86 setup with a network I/O path that needed no central iothread
>>>>>> locking anymore (required special MMIO exit handling).
>>>>>> ---
>>>>>>  kvm-all.c         |   18 ++++++++++++++++--
>>>>>>  target-i386/kvm.c |    7 +++++++
>>>>>>  target-ppc/kvm.c  |    4 ++++
>>>>>>  3 files changed, 27 insertions(+), 2 deletions(-)
>>>>>>
>>>>>> diff --git a/kvm-all.c b/kvm-all.c
>>>>>> index f8e4328..9c3e26f 100644
>>>>>> --- a/kvm-all.c
>>>>>> +++ b/kvm-all.c
>>>>>> @@ -1460,6 +1460,8 @@ int kvm_cpu_exec(CPUArchState *env)
>>>>>>          return EXCP_HLT;
>>>>>>      }
>>>>>>
>>>>>> +    qemu_mutex_unlock_iothread();
>>>>>> +
>>>>>>      do {
>>>>>>          if (env->kvm_vcpu_dirty) {
>>>>>>              kvm_arch_put_registers(env, KVM_PUT_RUNTIME_STATE);
>>>>>> @@ -1476,14 +1478,16 @@ int kvm_cpu_exec(CPUArchState *env)
>>>>>>               */
>>>>>>              qemu_cpu_kick_self();
>>>>>>          }
>>>>>> -        qemu_mutex_unlock_iothread();
>>>>>>
>>>>>>          run_ret = kvm_vcpu_ioctl(env, KVM_RUN, 0);
>>>>>>
>>>>>> -        qemu_mutex_lock_iothread();
>>>>>>          kvm_arch_post_run(env, run);
>>>>
>>>> target-i386/kvm.c
>>>>
>>>> void kvm_arch_post_run(CPUX86State *env, struct kvm_run *run)
>>>> {
>>>>     if (run->if_flag) {
>>>>         env->eflags |= IF_MASK;
>>>>     } else {
>>>>         env->eflags &= ~IF_MASK;
>>>>     }
>>>>     cpu_set_apic_tpr(env->apic_state, run->cr8);
>>>>     cpu_set_apic_base(env->apic_state, run->apic_base);
>>>> }
>>>>
>>>> Clearly there is no structure to any of the writes around the ones
>>>> in x86's kvm_arch_post_run, so it is unsafe.
>>>
>>> No access protocol to the CPUState and apic devices (who can write when,
>>> who can read when).
>>>
>>
>> Hmm, we may need the iothread lock around cpu_set_apic_tpr for
>> !kvm_irqchip_in_kernel(). And while we are at it, apic_base manipulation
>> can be put there as well.
>>
> As to !kvm_irqchip_in_kernel(), I think there are quite a few fields in
> CPUState (interrupt_request, eflags, halted, exit_request, exception_injected)
> which we must care about. They depend heavily on
> env->apic_state, so we treat them as one atomic context; that is to say,
> while these fields of env are being updated, we must not allow apic_state
> to change.
> And that is why I introduced cpu_lock in the patch "[PATCH 2/2] kvm:
> use per-cpu lock to free vcpu thread out of the big lock".
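A hypothetical sketch of the kind of per-CPU locking described above (not the
actual "[PATCH 2/2]" code); the cpu_lock field in env is assumed:

void kvm_arch_post_run(CPUX86State *env, struct kvm_run *run)
{
    /* Hypothetical: hold a per-CPU lock so env->apic_state cannot change
     * while the CPUState fields that depend on it are being updated. */
    qemu_mutex_lock(&env->cpu_lock);      /* assumed per-CPU mutex */
    if (run->if_flag) {
        env->eflags |= IF_MASK;
    } else {
        env->eflags &= ~IF_MASK;
    }
    cpu_set_apic_tpr(env->apic_state, run->cr8);
    cpu_set_apic_base(env->apic_state, run->apic_base);
    qemu_mutex_unlock(&env->cpu_lock);
}

Every writer of env->apic_state would have to take the same lock, including
the in-kernel-irqchip fast path, which is exactly the cost pointed out below.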

This per-cpu lock is optimizing the slow path (!kvm_irqchip_in_kernel)
at the expense of the fast path. It's better to optimize for in-kernel
irqchip first. One day, when the key problems are solved, we may look at
the userspace irqchip code path as well. Until then, BQL protection for
those paths is perfectly fine.
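
For illustration, a minimal sketch (not part of the posted patch) of keeping
BQL protection just for the userspace-irqchip path in target-i386's
kvm_arch_post_run, along the lines of the suggestion further up; the
kvm_irqchip_in_kernel() check and the lock scope are assumptions:

void kvm_arch_post_run(CPUX86State *env, struct kvm_run *run)
{
    if (run->if_flag) {
        env->eflags |= IF_MASK;
    } else {
        env->eflags &= ~IF_MASK;
    }

    if (!kvm_irqchip_in_kernel()) {
        /* Userspace APIC: its state can be touched by other threads,
         * so fall back to the big lock for these updates. */
        qemu_mutex_lock_iothread();
        cpu_set_apic_tpr(env->apic_state, run->cr8);
        cpu_set_apic_base(env->apic_state, run->apic_base);
        qemu_mutex_unlock_iothread();
    } else {
        /* In-kernel irqchip: only per-vcpu state is touched here. */
        cpu_set_apic_tpr(env->apic_state, run->cr8);
        cpu_set_apic_base(env->apic_state, run->apic_base);
    }
}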

As a first step, I will post a series later that gets rid of
kvm_flush_coalesced_mmio_buffer in the common vmexit path.

Jan

