qemu-devel
Re: [PATCH v2 4/4] util/event-loop-base: Introduce options to set the thread pool size


From: Stefan Hajnoczi
Subject: Re: [PATCH v2 4/4] util/event-loop-base: Introduce options to set the thread pool size
Date: Mon, 14 Mar 2022 13:35:46 +0000

On Fri, Mar 11, 2022 at 11:40:30AM +0100, Nicolas Saenz Julienne wrote:
> On Thu, 2022-03-10 at 10:45 +0000, Stefan Hajnoczi wrote:
> > On Thu, Mar 03, 2022 at 04:13:07PM +0100, Nicolas Saenz Julienne wrote:
> > > @@ -537,10 +546,19 @@
> > >  #                 0 means that the engine will use its default
> > >  #                 (default:0, since 6.1)
> > >  #
> > > +# @thread-pool-min: minimum number of threads readily available in the thread
> > > +#                   pool
> > > +#                   (default:0, since 6.2)
> > > +#
> > > +# @thread-pool-max: maximum number of threads the thread pool can contain
> > > +#                   (default:64, since 6.2)
> > 
> > Here and elsewhere:
> > s/6.2/7.1/
> 
> Yes, forgot to mention it was a placeholder, as I wasn't sure what version to
> target.
> 
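(As an aside, for anyone following along: with this series the new limits would
be configured on the event loop objects, e.g. roughly

    -object iothread,id=iothread0,thread-pool-min=8,thread-pool-max=64

with illustrative values; the property names follow the QAPI hunk above.)
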
> > > @@ -294,6 +314,36 @@ void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg)
> > >      thread_pool_submit_aio(pool, func, arg, NULL, NULL);
> > >  }
> > >  
> > > +void thread_pool_update_params(ThreadPool *pool, AioContext *ctx)
> > > +{
> > > +    qemu_mutex_lock(&pool->lock);
> > > +
> > > +    pool->min_threads = ctx->thread_pool_min;
> > > +    pool->max_threads = ctx->thread_pool_max;
> > > +
> > > +    /*
> > > +     * We either have to:
> > > +     *  - Increase the number of available threads until over the min_threads
> > > +     *    threshold.
> > > +     *  - Decrease the number of available threads until under the max_threads
> > > +     *    threshold.
> > > +     *  - Do nothing. The current number of threads falls in between the min and
> > > +     *    max thresholds. We'll let the pool manage itself.
> > > +     */
> > > +    for (int i = pool->cur_threads; i < pool->min_threads; i++) {
> > > +        spawn_thread(pool);
> > > +    }
> > > +
> > > +    while (pool->cur_threads > pool->max_threads) {
> > > +        qemu_sem_post(&pool->sem);
> > > +        qemu_mutex_unlock(&pool->lock);
> > > +        qemu_cond_wait(&pool->worker_stopped, &pool->lock);
> > > +        qemu_mutex_lock(&pool->lock);
> > 
> > Same question as Patch 1. This looks incorrect because qemu_cond_wait()
> > already drops pool->lock if it needs to block.
> 
> Yes, I'll fix that.
> 
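Thanks. For reference, without the redundant unlock()/lock() pair the loop
would look roughly like this (untested sketch; qemu_cond_wait() atomically
drops and re-acquires pool->lock while it blocks):

    while (pool->cur_threads > pool->max_threads) {
        qemu_sem_post(&pool->sem);
        /* releases pool->lock, blocks, then re-acquires it */
        qemu_cond_wait(&pool->worker_stopped, &pool->lock);
    }
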
> > Also, why wait? If worker threads are blocked for some reason then our
> > thread will block too.
> 
> Exiting thread_pool_update_params() before honoring the new constraints is a
> source of potential race conditions (having to worry about situations where
> cur_threads > max_threads), and on systems where determinism is important it's
> crucial to have a clear boundary between 'unsafe' and 'safe' states.

On the other hand, waiting here creates a reliability problem where a single
stuck worker thread can block all of QEMU. Maybe it's better to let a blocked
worker thread terminate eventually than to hang QEMU?
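
Something like this, just as an untested sketch (assuming the worker threads
already re-check pool->cur_threads against pool->max_threads when they wake
up and exit on their own):

    void thread_pool_update_params(ThreadPool *pool, AioContext *ctx)
    {
        qemu_mutex_lock(&pool->lock);

        pool->min_threads = ctx->thread_pool_min;
        pool->max_threads = ctx->thread_pool_max;

        /* Grow up to the new minimum. */
        for (int i = pool->cur_threads; i < pool->min_threads; i++) {
            spawn_thread(pool);
        }

        /* Wake the excess workers, but don't wait for them to exit. */
        for (int i = pool->cur_threads; i > pool->max_threads; i--) {
            qemu_sem_post(&pool->sem);
        }

        qemu_mutex_unlock(&pool->lock);
    }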

Stefan
