From: Sergio Lopez
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH for-4.0] aio-posix: ensure poll mode is left when aio_notify is called
Date: Wed, 27 Mar 2019 16:18:47 +0100
User-agent: mu4e 1.0; emacs 26.1
Paolo Bonzini writes:
> With aio=thread, adaptive polling makes latency worse rather than
> better, because it delays the execution of the ThreadPool's
> completion bottom half.
>
> event_notifier_poll() does run while polling, detecting that
> a bottom half was scheduled by a worker thread, but because
> ctx->notifier is explicitly ignored in run_poll_handlers_once(),
> scheduling the BH does not count as making progress and
> run_poll_handlers() keeps running. Fix this by recomputing
> the deadline after *timeout could have changed.
>
> With this change, ThreadPool still cannot participate in polling
> but at least it does not suffer from extra latency.
>
> Reported-by: Sergio Lopez <address@hidden>
> Cc: Stefan Hajnoczi <address@hidden>
> Cc: Kevin Wolf <address@hidden>
> Cc: address@hidden
> Signed-off-by: Paolo Bonzini <address@hidden>
> ---
> util/aio-posix.c | 9 +++++++--
> 1 file changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/util/aio-posix.c b/util/aio-posix.c
> index 6fbfa79..b166cda 100644
> --- a/util/aio-posix.c
> +++ b/util/aio-posix.c
> @@ -519,6 +519,10 @@ static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout)
>          if (!node->deleted && node->io_poll &&
>              aio_node_check(ctx, node->is_external) &&
>              node->io_poll(node->opaque)) {
> +            /*
> +             * Polling was successful, exit try_poll_mode immediately
> +             * to adjust the next polling time.
> +             */
>              *timeout = 0;
>              if (node->opaque != &ctx->notifier) {
>                  progress = true;
>              }
> @@ -558,8 +562,9 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
>      do {
>          progress = run_poll_handlers_once(ctx, timeout);
>          elapsed_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start_time;
> -    } while (!progress && elapsed_time < max_ns
> -             && !atomic_read(&ctx->poll_disable_cnt));
> +        max_ns = MIN(*timeout, max_ns);
While testing this patch I've noticed we also need to deal with "timeout"
being "-1" (i.e. block indefinitely) when run_poll_handlers() is called:
MIN(*timeout, max_ns) then leaves max_ns negative, so we'll be polling
just once in this situation.
As we're using "timeout" here both as a way to break the loop at
run_poll_handlers() and to avoid calling ppoll() if the poll was
successful, I think we could do something like this:
===========================
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 6fbfa7924f..a9081add67 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -558,7 +558,7 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
     do {
         progress = run_poll_handlers_once(ctx, timeout);
         elapsed_time = qemu_clock_get_ns(QEMU_CLOCK_REALTIME) - start_time;
-    } while (!progress && elapsed_time < max_ns
+    } while (*timeout != 0 && elapsed_time < max_ns
              && !atomic_read(&ctx->poll_disable_cnt));
 
     /* If time has passed with no successful polling, adjust *timeout to
===========================
> +        assert(!(max_ns && progress));
> +    } while (elapsed_time < max_ns && !atomic_read(&ctx->poll_disable_cnt));
> 
>      /* If time has passed with no successful polling, adjust *timeout to
>       * keep the same ending time.