
From: Stefan Hajnoczi
Subject: Re: [Qemu-block] [PATCH] thread-pool: Use an EventNotifier to coordinate with AioContext
Date: Fri, 29 Mar 2019 10:48:44 +0000
User-agent: Mutt/1.11.3 (2019-02-01)

On Tue, Mar 26, 2019 at 12:44:11PM +0100, Sergio Lopez wrote:
> 
> Stefan Hajnoczi writes:
> 
> > On Mon, Mar 25, 2019 at 02:34:36PM +0100, Sergio Lopez wrote:
> >> Our current ThreadPool implementation lacks support for AioContext's
> >> event notifications. This not only means that it can't take advantage
> >> of the IOThread's adaptive polling, but also that the latter works
> >> against it, as it delays the execution of the bottom-half
> >> completions.
> >
> > util/async.c:event_notifier_poll() participates in polling and should
> > detect that a bottom half was scheduled by a worker thread.
> >
> > Can you investigate, because I'm not sure this commit description has
> > identified the root cause of the performance improvement?
> 
> The problem with event_notifier_poll() is that ctx->notifier is
> explicitly ignored in run_poll_handlers_once(), so the BH notification
> doesn't count as making progress and run_poll_handlers() keeps running
> the polling loop:
> 
> static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout)
> {
>     bool progress = false;
>     AioHandler *node;
> 
>     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
>         if (!node->deleted && node->io_poll &&
>             aio_node_check(ctx, node->is_external) &&
>             node->io_poll(node->opaque)) {
>             *timeout = 0;
>             if (node->opaque != &ctx->notifier) {
>                 progress = true;
>             }
>         }
> 
>         /* Caller handles freeing deleted nodes.  Don't do it here. */
>     }
> 
>     return progress;
> }
> 
> static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
> {
>     bool progress;
>     int64_t start_time, elapsed_time;
> 
>     assert(ctx->notify_me);
>     assert(qemu_lockcnt_count(&ctx->list_lock) > 0);
> 
>     trace_run_poll_handlers_begin(ctx, max_ns, *timeout);
> 
>     start_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
>     do {
>         progress = run_poll_handlers_once(ctx, timeout);
>         elapsed_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start_time;
>     } while (!progress && elapsed_time < max_ns
>              && !atomic_read(&ctx->poll_disable_cnt));
> (...)

For the record, Sergio, Paolo, and I discussed this more on IRC.  Sergio
is investigating the options here.

Stefan
