
From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH 2/3] AioContext: acquire/release AioContext during aio_poll
Date: Thu, 26 Feb 2015 14:21:14 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0


On 25/02/2015 06:45, Fam Zheng wrote:
> On Fri, 02/20 17:26, Paolo Bonzini wrote:
>> This is the first step in pushing down acquire/release, and will let
>> rfifolock drop the contention callback feature.
>>
>> Signed-off-by: Paolo Bonzini <address@hidden>
>> ---
>>  aio-posix.c         |  9 +++++++++
>>  aio-win32.c         |  8 ++++++++
>>  include/block/aio.h | 15 ++++++++-------
>>  3 files changed, 25 insertions(+), 7 deletions(-)
>>
>> diff --git a/aio-posix.c b/aio-posix.c
>> index 4a30b77..292ae84 100644
>> --- a/aio-posix.c
>> +++ b/aio-posix.c
>> @@ -238,6 +238,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>      bool progress;
>>      int64_t timeout;
>>  
>> +    aio_context_acquire(ctx);
>>      was_dispatching = ctx->dispatching;
>>      progress = false;
>>  
>> @@ -267,7 +268,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>      timeout = blocking ? aio_compute_timeout(ctx) : 0;
>>  
>>      /* wait until next event */
>> +    if (timeout) {
>> +        aio_context_release(ctx);
>> +    }
>>      ret = qemu_poll_ns((GPollFD *)pollfds, npfd, timeout);
>> +    if (timeout) {
>> +        aio_context_acquire(ctx);
>> +    }
>>  
>>      /* if we have any readable fds, dispatch event */
>>      if (ret > 0) {
>> @@ -285,5 +292,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>      }
>>  
>>      aio_set_dispatching(ctx, was_dispatching);
>> +    aio_context_release(ctx);
>> +
>>      return progress;
>>  }
>> diff --git a/aio-win32.c b/aio-win32.c
>> index e6f4ced..233d8f5 100644
>> --- a/aio-win32.c
>> +++ b/aio-win32.c
>> @@ -283,6 +283,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>      int count;
>>      int timeout;
>>  
>> +    aio_context_acquire(ctx);
>>      have_select_revents = aio_prepare(ctx);
>>      if (have_select_revents) {
>>          blocking = false;
>> @@ -323,7 +324,13 @@ bool aio_poll(AioContext *ctx, bool blocking)
>>  
>>          timeout = blocking
>>              ? qemu_timeout_ns_to_ms(aio_compute_timeout(ctx)) : 0;
>> +        if (timeout) {
>> +            aio_context_release(ctx);
> 
> Why are the unlock/lock pairs around poll conditional?

Both iothread.c and os_host_main_loop_wait already do this; IIRC it was
measurably faster.

In particular, iothread.c was completely avoiding acquire/release around
non-blocking aio_poll.  This patch does not have exactly the same behavior,
but it tries to be close:

-        aio_context_acquire(iothread->ctx);
-        blocking = true;
-        while (!iothread->stopping && aio_poll(iothread->ctx, blocking)) {
-            /* Progress was made, keep going */
-            blocking = false;
-        }
-        aio_context_release(iothread->ctx);

The exact same behavior could be implemented directly in aio_poll easily
enough; for this RFC I'm keeping the code a little simpler.

Paolo

> 
> Fam
> 
>> +        }
>>          ret = WaitForMultipleObjects(count, events, FALSE, timeout);
>> +        if (timeout) {
>> +            aio_context_acquire(ctx);
>> +        }
>>          aio_set_dispatching(ctx, true);
>>  
>>          if (first && aio_bh_poll(ctx)) {



reply via email to

[Prev in Thread] Current Thread [Next in Thread]