[Qemu-devel] [PATCH v4 3/6] aio-win32: reorganize polling loop
From: Paolo Bonzini
Subject: [Qemu-devel] [PATCH v4 3/6] aio-win32: reorganize polling loop
Date: Tue, 21 Jul 2015 16:07:50 +0200
Preparatory bugfixes and tweaks to the loop before the next patch:
- disable dispatch optimization during aio_prepare. This fixes a bug.
- do not modify "blocking" until after the first WaitForMultipleObjects
call. This is needed in the next patch.
- change the loop to do...while. This makes it obvious that the loop
is always entered at least once. In the next patch this is important
because the first iteration undoes the ctx->notify_me increment that
happened before entering the loop.
Signed-off-by: Paolo Bonzini <address@hidden>
---
aio-win32.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/aio-win32.c b/aio-win32.c
index 233d8f5..9268b5c 100644
--- a/aio-win32.c
+++ b/aio-win32.c
@@ -284,11 +284,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
     int timeout;
 
     aio_context_acquire(ctx);
-    have_select_revents = aio_prepare(ctx);
-    if (have_select_revents) {
-        blocking = false;
-    }
-
     was_dispatching = ctx->dispatching;
     progress = false;
 
@@ -304,6 +299,8 @@ bool aio_poll(AioContext *ctx, bool blocking)
      */
     aio_set_dispatching(ctx, !blocking);
 
+    have_select_revents = aio_prepare(ctx);
+
     ctx->walking_handlers++;
 
     /* fill fd sets */
@@ -317,12 +314,18 @@ bool aio_poll(AioContext *ctx, bool blocking)
     ctx->walking_handlers--;
     first = true;
 
-    /* wait until next event */
-    while (count > 0) {
+    /* ctx->notifier is always registered. */
+    assert(count > 0);
+
+    /* Multiple iterations, all of them non-blocking except the first,
+     * may be necessary to process all pending events. After the first
+     * WaitForMultipleObjects call ctx->notify_me will be decremented.
+     */
+    do {
         HANDLE event;
         int ret;
 
-        timeout = blocking
+        timeout = blocking && !have_select_revents
             ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
         if (timeout) {
             aio_context_release(ctx);
@@ -351,7 +354,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
         blocking = false;
 
         progress |= aio_dispatch_handlers(ctx, event);
-    }
+    } while (count > 0);
 
     progress |= timerlistgroup_run_timers(&ctx->tlg);
--
2.4.3