
Re: [Qemu-devel] [PATCH 10/12] cpus-common: Introduce async_safe_run_on_cpu()


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 10/12] cpus-common: Introduce async_safe_run_on_cpu()
Date: Mon, 5 Sep 2016 17:14:48 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.2.0


On 05/09/2016 17:08, Alex Bennée wrote:
> 
> Paolo Bonzini <address@hidden> writes:
> 
>> Signed-off-by: Paolo Bonzini <address@hidden>
>> ---
>>  cpus-common.c     | 25 +++++++++++++++++++++++++
>>  include/qom/cpu.h | 12 ++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/cpus-common.c b/cpus-common.c
>> index 59c8dc8..88cf5ec 100644
>> --- a/cpus-common.c
>> +++ b/cpus-common.c
>> @@ -144,6 +144,11 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
>>      queue_work_on_cpu(cpu, wi);
>>  }
>>
>> +typedef struct SafeWorkItem {
>> +    run_on_cpu_func func;
>> +    void *data;
>> +} SafeWorkItem;
>> +
>>  /* Wait for pending exclusive operations to complete.  The exclusive lock
>>     must be held.  */
>>  static inline void exclusive_idle(void)
>> @@ -208,6 +213,26 @@ void cpu_exec_end(CPUState *cpu)
>>      qemu_mutex_unlock(&qemu_cpu_list_mutex);
>>  }
>>
>> +static void async_safe_run_on_cpu_fn(CPUState *cpu, void *data)
>> +{
>> +    SafeWorkItem *w = data;
>> +
>> +    start_exclusive();
>> +    w->func(cpu, w->data);
>> +    end_exclusive();
>> +    g_free(w);
>> +}
>> +
>> +void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
>> +{
>> +    SafeWorkItem *w = g_new(SafeWorkItem, 1);
> 
> OK, so I appreciate this approach is a neat way to embed safe work in the
> existing queue, but does it really offer that much to justify yet another
> dynamic allocation vs. an extra flag in the WorkItem?

Actually I had it that way in the "first version" (the one that I
promised last Monday).  The questions are:

1) is it really so different to have one vs. two mini allocations?

2) is this a fast path?

But it's no big deal; I can certainly change it back.

Paolo

> In previous iterations I could DoS QEMU with a guest that does heavy
> cross-CPU TLB flushing, which led to a storm of mini allocations (for the
> list and associated structures).  This caused massive memory usage as
> the queue backed up.
> 
> I appreciate it was a fairly special test case, and I introduced other
> mitigations in the base patches' cputlb code to get around it; however, it
> was the driver for my experimenting with a pre-allocated array for
> holding work items (sketched below).
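A rough sketch of that pre-allocated approach: a fixed-size per-CPU ring of work items, so queuing work does not allocate. This is purely illustrative and not code from any posted series; the ring size, field names, and overflow policy are assumptions:

    #define CPU_WORK_RING_SIZE 64

    typedef struct CPUWorkRing {
        struct qemu_work_item items[CPU_WORK_RING_SIZE];
        unsigned head;          /* next free slot (monotonic counter) */
        unsigned tail;          /* next item to run (monotonic counter) */
    } CPUWorkRing;

    /* Returns false when the ring is full; the caller must then wait
     * for the target vCPU to drain it, or coalesce the request. */
    static bool work_ring_enqueue(CPUWorkRing *ring, run_on_cpu_func func,
                                  void *data)
    {
        struct qemu_work_item *wi;

        if (ring->head - ring->tail == CPU_WORK_RING_SIZE) {
            return false;
        }
        wi = &ring->items[ring->head % CPU_WORK_RING_SIZE];
        wi->func = func;
        wi->data = data;
        ring->head++;
        return true;
    }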
> 
>> +
>> +    w->func = func;
>> +    w->data = data;
>> +
>> +    async_run_on_cpu(cpu, async_safe_run_on_cpu_fn, w);
>> +}
>> +
>>  void process_queued_cpu_work(CPUState *cpu)
>>  {
>>      struct qemu_work_item *wi;
>> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
>> index 0e04e8f..54a875e 100644
>> --- a/include/qom/cpu.h
>> +++ b/include/qom/cpu.h
>> @@ -663,6 +663,18 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
>>  void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
>>
>>  /**
>> + * async_safe_run_on_cpu:
>> + * @cpu: The vCPU to run on.
>> + * @func: The function to be executed.
>> + * @data: Data to pass to the function.
>> + *
>> + * Schedules the function @func for execution on the vCPU @cpu asynchronously,
>> + * while all other vCPUs are sleeping.  @func is called with the CPU list lock
>> + * taken (and for system emulation the BQL); any other lock can be taken safely.
>> + */
>> +void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
>> +
>> +/**
>>   * qemu_get_cpu:
>>  * @index: The CPUState@cpu_index value of the CPU to obtain.
>>   *
> 
> 
> --
> Alex Bennée
> 
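A minimal usage sketch of the API being added: scheduling a cross-vCPU TLB flush so that it runs while all other vCPUs are stopped. The do_flush_tlb() callback and schedule_safe_tlb_flush() wrapper are hypothetical, not part of the patch:

    /* Hypothetical callback: runs between start_exclusive() and
     * end_exclusive(), so no other vCPU is executing concurrently. */
    static void do_flush_tlb(CPUState *cpu, void *data)
    {
        tlb_flush(cpu, 1);      /* 2016-era signature: (cpu, flush_global) */
    }

    static void schedule_safe_tlb_flush(CPUState *cpu)
    {
        async_safe_run_on_cpu(cpu, do_flush_tlb, NULL);
    }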


