From: Richard Henderson
Subject: Re: [Qemu-devel] [RFC PATCH 2/3] cpus-common: Cache allocated work items
Date: Mon, 28 Aug 2017 10:47:28 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.2.1

On 08/27/2017 08:53 PM, Pranith Kumar wrote:
> Using heaptrack, I found that quite a few of our temporary allocations
> are coming from allocating work items. Instead of doing this
> continuously, we can cache the allocated items and reuse them instead
> of freeing them.
> 
> This reduces the number of allocations by 25% (200000 -> 150000 for
> ARM64 boot+shutdown test).
> 
> Signed-off-by: Pranith Kumar <address@hidden>
> ---
>  cpus-common.c | 85 ++++++++++++++++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 70 insertions(+), 15 deletions(-)
> 
> diff --git a/cpus-common.c b/cpus-common.c
> index 59f751ecf9..a1c4c7d1a3 100644
> --- a/cpus-common.c
> +++ b/cpus-common.c
> @@ -24,6 +24,7 @@
>  #include "sysemu/cpus.h"
>  
>  static QemuMutex qemu_cpu_list_lock;
> +static QemuMutex qemu_wi_pool_lock;
>  static QemuCond exclusive_cond;
>  static QemuCond exclusive_resume;
>  static QemuCond qemu_work_cond;
> @@ -33,6 +34,58 @@ static QemuCond qemu_work_cond;
>   */
>  static int pending_cpus;
>  
> +typedef struct qemu_work_item {
> +    struct qemu_work_item *next;
> +    run_on_cpu_func func;
> +    run_on_cpu_data data;
> +    bool free, exclusive, done;
> +} qemu_work_item;
> +
> +typedef struct qemu_wi_pool {
> +    qemu_work_item *first, *last;
> +} qemu_wi_pool;
> +
> +qemu_wi_pool *wi_free_pool;
> +
> +static void qemu_init_workitem_pool(void)
> +{
> +    wi_free_pool = g_malloc0(sizeof(qemu_wi_pool));
> +    wi_free_pool->first = NULL;
> +    wi_free_pool->last  = NULL;
> +}
> +
> +static void qemu_wi_pool_insert(qemu_work_item *item)
> +{
> +    qemu_mutex_lock(&qemu_wi_pool_lock);
> +    if (wi_free_pool->last == NULL) {
> +        wi_free_pool->first = item;
> +        wi_free_pool->last = item;
> +    } else {
> +        wi_free_pool->last->next = item;
> +        wi_free_pool->last = item;
> +    }
> +    qemu_mutex_unlock(&qemu_wi_pool_lock);
> +}
> +
> +static qemu_work_item* qemu_wi_pool_remove(void)
> +{
> +    qemu_mutex_lock(&qemu_wi_pool_lock);
> +    qemu_work_item *ret = wi_free_pool->first;
> +
> +    if (ret == NULL)
> +        goto out;
> +
> +    wi_free_pool->first = ret->next;
> +    if (wi_free_pool->last == ret) {
> +        wi_free_pool->last = NULL;
> +    }

Why does this list need to record a "last" element?
It would seem a simple LIFO would be sufficient.
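
As a rough sketch (reusing the qemu_work_item type and qemu_wi_pool_lock from
the patch above), the pool then collapses to a single head pointer, with the
item's own next field serving as the free-list link:

static qemu_work_item *wi_free_list;    /* LIFO head, protected by qemu_wi_pool_lock */

static void qemu_wi_pool_insert(qemu_work_item *item)
{
    qemu_mutex_lock(&qemu_wi_pool_lock);
    item->next = wi_free_list;          /* push onto the top of the stack */
    wi_free_list = item;
    qemu_mutex_unlock(&qemu_wi_pool_lock);
}

static qemu_work_item *qemu_wi_pool_remove(void)
{
    qemu_mutex_lock(&qemu_wi_pool_lock);
    qemu_work_item *item = wi_free_list;
    if (item) {
        wi_free_list = item->next;      /* pop the most recently freed item */
    }
    qemu_mutex_unlock(&qemu_wi_pool_lock);
    return item;                        /* NULL means the pool is empty */
}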

(You would also be able to manage the list via cmpxchg without a separate lock,
but perhaps the difference between the two isn't measurable.)
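
A rough sketch of that lock-free variant, assuming the atomic_read()/atomic_cmpxchg()
helpers from qemu/atomic.h; note that a cmpxchg-based pop on a recycled free list is
exposed to the classic ABA problem, so this is illustrative rather than a drop-in
replacement:

static qemu_work_item *wi_free_list;    /* LIFO head, no separate lock */

static void qemu_wi_pool_insert(qemu_work_item *item)
{
    qemu_work_item *old;
    do {
        old = atomic_read(&wi_free_list);
        item->next = old;               /* link on top of the current head */
    } while (atomic_cmpxchg(&wi_free_list, old, item) != old);
}

static qemu_work_item *qemu_wi_pool_remove(void)
{
    qemu_work_item *old;
    do {
        old = atomic_read(&wi_free_list);
        if (old == NULL) {
            return NULL;                /* pool is empty */
        }
    } while (atomic_cmpxchg(&wi_free_list, old, old->next) != old);
    return old;
}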


r~


