From: Nicolas Saenz Julienne
Subject: Re: [PATCH 3/3] util/event-loop: Introduce options to set the thread pool size
Date: Mon, 28 Feb 2022 20:20:33 +0100
User-agent: Evolution 3.42.4 (3.42.4-1.fc35)

On Thu, 2022-02-24 at 10:40 +0000, Stefan Hajnoczi wrote:
> On Mon, Feb 21, 2022 at 06:08:45PM +0100, Nicolas Saenz Julienne wrote:
> > The thread pool regulates itself: when idle, it kills threads until
> > empty; when in demand, it creates new threads until full. This
> > behaviour doesn't play well with latency-sensitive workloads, where
> > the price of creating a new thread is too high. For example, when
> > paired with qemu's '-mlock', or when using safety features like
> > SafeStack, creating a new thread has been measured to take multiple
> > milliseconds.
> > 
> > In order to mitigate this, let's introduce a new 'EventLoopBackend'
> > property to set the thread pool size. The threads will be created during
> > the pool's initialization, remain available during its lifetime
> > regardless of demand, and destroyed upon freeing it. A properly
> > characterized workload will then be able to configure the pool to avoid
> > any latency spike.
> > 
> > Signed-off-by: Nicolas Saenz Julienne <nsaenzju@redhat.com>
> > ---
> >  include/block/aio.h | 11 +++++++++++
> >  qapi/qom.json       |  4 +++-
> >  util/async.c        |  3 +++
> >  util/event-loop.c   | 15 ++++++++++++++-
> >  util/event-loop.h   |  4 ++++
> >  util/main-loop.c    | 13 +++++++++++++
> >  util/thread-pool.c  | 41 +++++++++++++++++++++++++++++++++++++----
> >  7 files changed, 85 insertions(+), 6 deletions(-)
> > 
> > diff --git a/include/block/aio.h b/include/block/aio.h
> > index 5634173b12..331483d1d1 100644
> > --- a/include/block/aio.h
> > +++ b/include/block/aio.h
> > @@ -192,6 +192,8 @@ struct AioContext {
> >      QSLIST_HEAD(, Coroutine) scheduled_coroutines;
> >      QEMUBH *co_schedule_bh;
> >  
> > +    int pool_min;
> > +    int pool_max;
> 
> Are these fields protected by ThreadPool->lock? Please document. This is
> a clue that maybe these fields belong in ThreadPool.

Yes they are. I'll document it properly.

> Regarding the field names: the AioContext thread pool field is called
> thread_pool and the user-visible parameters are thread-pool-min/max. I
> suggest calling the fields thread_pool_min/max too so it's clear which
> pool we're talking about and there is a correspondence to user-visible
> parameters.

Noted.
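
Something like this, then (untested sketch; the comment wording is mine,
and the locking note covers your first point too):

    /* include/block/aio.h */
    struct AioContext {
        /* ... */

        /*
         * Thread pool sizing, mirroring the user-visible
         * thread-pool-min/thread-pool-max parameters. Written under
         * ThreadPool->lock once the pool exists.
         */
        int thread_pool_min;
        int thread_pool_max;
    };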

> > @@ -350,3 +358,28 @@ void thread_pool_free(ThreadPool *pool)
> >      qemu_mutex_destroy(&pool->lock);
> >      g_free(pool);
> >  }
> > +
> > +void aio_context_set_thread_pool_params(AioContext *ctx, uint64_t min,
> > +                                        uint64_t max, Error **errp)
> > +{
> > +    ThreadPool *pool = ctx->thread_pool;
> > +
> > +    if (min > max || !max) {
> 
> ctx->pool_min/max are int while the min/max arguments are uint64_t.
> Please add an INT_MAX check to detect overflow.

Noted.
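
Concretely, something along these lines (sketch, assuming the arguments
stay uint64_t; limits.h provides INT_MAX):

    #include <limits.h> /* INT_MAX */

    void aio_context_set_thread_pool_params(AioContext *ctx, uint64_t min,
                                            uint64_t max, Error **errp)
    {
        /*
         * min <= max <= INT_MAX also bounds min, so a single check
         * guards both values before they are narrowed to int fields.
         */
        if (min > max || !max || max > INT_MAX) {
            error_setg(errp, "bad thread-pool-min/thread-pool-max values");
            return;
        }
        /* ... rest as before ... */
    }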

> > +        error_setg(errp, "bad thread-pool-min/thread-pool-max values");
> > +        return;
> > +    }
> > +
> > +    if (pool) {
> > +        qemu_mutex_lock(&pool->lock);
> > +    }
> 
> This code belongs in util/thread-pool.c. I guess the reason for keeping
> the fields in AioContext instead of ThreadPool is because the ThreadPool
> is created on demand and we'd have nowhere to store the parameter value.

Indeed.

> I suggest we bite the bullet and keep an extra copy of the variables in
> AioContext with a clean ThreadPool interface (thread_pool_set_params())
> instead of letting AioContext and ThreadPool access each other's
> internals.

OK!
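
So roughly this (sketch; thread_pool_set_params() follows your naming,
and the min_threads/max_threads fields in ThreadPool are my assumption):

    /* util/async.c: AioContext keeps its own copy of the parameters */
    void aio_context_set_thread_pool_params(AioContext *ctx, uint64_t min,
                                            uint64_t max, Error **errp)
    {
        /* ... validation as above ... */
        ctx->thread_pool_min = min;
        ctx->thread_pool_max = max;

        if (ctx->thread_pool) {
            thread_pool_set_params(ctx->thread_pool, min, max);
        }
    }

    /* util/thread-pool.c: the pool only ever touches its own internals */
    void thread_pool_set_params(ThreadPool *pool, int min, int max)
    {
        qemu_mutex_lock(&pool->lock);
        pool->min_threads = min;
        pool->max_threads = max;

        /* Grow eagerly up to the new minimum. */
        for (int i = pool->cur_threads; i < pool->min_threads; i++) {
            spawn_thread(pool);
        }
        qemu_mutex_unlock(&pool->lock);
    }

That keeps the locking entirely inside util/thread-pool.c.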

> > +
> > +    ctx->pool_min = min;
> > +    ctx->pool_max = max;
> > +
> > +    if (pool) {
> > +        for (int i = pool->cur_threads; i < ctx->pool_min; i++) {
> > +            spawn_thread(pool);
> > +        }
> 
> What about the reverse: when min is lowered and there are a bunch of
> idle worker threads we could wake them up so they terminate until
> ->pool_min is reached again?

Makes sense, I'll look into it.
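
For the archives, one way that could look (rough, untested sketch on top
of the current worker loop): wake one idle worker per surplus thread, and
let a worker that wakes up with no pending work exit while the pool is
above the new minimum.

    /* In thread_pool_set_params(), after lowering min_threads: */
    int surplus = MIN(pool->idle_threads,
                      pool->cur_threads - pool->min_threads);
    for (int i = 0; i < surplus; i++) {
        qemu_sem_post(&pool->sem); /* wake an idle worker with no work */
    }

    /* In worker_thread(), after qemu_sem_timedwait() returns: */
    if (ret == -1 || pool->stopping ||
        (QTAILQ_EMPTY(&pool->request_list) &&
         pool->cur_threads > pool->min_threads)) {
        break; /* idle timeout, shutdown, or woken just to shrink */
    }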

-- 
Nicolás Sáenz



