
Re: [PATCH 1/2] linux-aio: use LinuxAioState from the running thread


From: Kevin Wolf
Subject: Re: [PATCH 1/2] linux-aio: use LinuxAioState from the running thread
Date: Fri, 30 Sep 2022 17:32:11 +0200

On 30.09.2022 at 12:00, Emanuele Giuseppe Esposito wrote:
> 
> 
> On 29/09/2022 at 16:52, Kevin Wolf wrote:
> > On 09.06.2022 at 15:44, Emanuele Giuseppe Esposito wrote:
> >> From: Paolo Bonzini <pbonzini@redhat.com>
> >>
> >> Remove usage of aio_context_acquire by always submitting asynchronous
> >> AIO to the current thread's LinuxAioState.
> >>
> >> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> >> Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
> >> ---
> >>  block/file-posix.c  |  3 ++-
> >>  block/linux-aio.c   | 13 ++++++-------
> >>  include/block/aio.h |  4 ----
> >>  3 files changed, 8 insertions(+), 12 deletions(-)
> >>
> >> diff --git a/block/file-posix.c b/block/file-posix.c
> >> index 48cd096624..33f92f004a 100644
> >> --- a/block/file-posix.c
> >> +++ b/block/file-posix.c
> >> @@ -2086,7 +2086,8 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
> >>  #endif
> >>  #ifdef CONFIG_LINUX_AIO
> >>      } else if (s->use_linux_aio) {
> >> -        LinuxAioState *aio = aio_get_linux_aio(bdrv_get_aio_context(bs));
> >> +        AioContext *ctx = qemu_get_current_aio_context();
> >> +        LinuxAioState *aio = aio_get_linux_aio(ctx);
> >>          assert(qiov->size == bytes);
> >>          return laio_co_submit(bs, aio, s->fd, offset, qiov, type,
> >>                                s->aio_max_batch);
> > 
> > raw_aio_plug() and raw_aio_unplug() need the same change.
> > 
> > I wonder if we should actually better remove the 'aio' parameter from
> > the functions that linux-aio.c offers to avoid suggesting that any
> > LinuxAioState works for any thread. Getting it from the current
> > AioContext is something it can do by itself. But this would be code
> > cleanup for a separate patch.
> 
> I do not think that this would work. At least not for all functions of
> the API. I tried removing the ctx parameter from aio_setup_linux_aio and
> it's already problematic, as it is used by raw_aio_attach_aio_context(),
> which is a .bdrv_attach_aio_context() callback and should be called
> by the main thread. So that function needs the AioContext parameter.
> 
> So maybe for now just simplify aio_get_linux_aio()? In a separate patch.

Oh, I don't mind the ctx parameter in these functions at all.

I was talking about the functions in linux-aio.c, specifically
laio_co_submit(), laio_io_plug() and laio_io_unplug(). They could call
aio_get_linux_aio() internally for the current thread instead of letting
the caller do that and giving the false impression that there is more
than one correct value for their LinuxAioState parameter.

But anyway, as I said, this would be a separate cleanup patch. For this
one, it's just important that at least file-posix.c does the right thing
for plug/unplug, too.
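
Just to make that concrete, here is a rough sketch of the cleanup I mean
(purely illustrative, not a tested patch; the exact signature is my guess
based on the current callers):

/*
 * Sketch only: laio_co_submit() looks up the LinuxAioState for the thread
 * it is running in by itself, so callers can no longer pass a "wrong" one.
 * laio_io_plug() and laio_io_unplug() would lose the parameter in the same
 * way.
 */
int coroutine_fn laio_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
                                QEMUIOVector *qiov, int type,
                                uint64_t dev_max_batch)
{
    /* Always the LinuxAioState of the AioContext we are running in */
    LinuxAioState *s = aio_get_linux_aio(qemu_get_current_aio_context());

    /* ... submission path unchanged, using s ... */
}

raw_co_prw() and raw_aio_plug()/raw_aio_unplug() would then not need to know
about LinuxAioState at all.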

> >> diff --git a/block/linux-aio.c b/block/linux-aio.c
> >> index 4c423fcccf..1d3cc767d1 100644
> >> --- a/block/linux-aio.c
> >> +++ b/block/linux-aio.c
> >> @@ -16,6 +16,9 @@
> >>  #include "qemu/coroutine.h"
> >>  #include "qapi/error.h"
> >>  
> >> +/* Only used for assertions.  */
> >> +#include "qemu/coroutine_int.h"
> >> +
> >>  #include <libaio.h>
> >>  
> >>  /*
> >> @@ -56,10 +59,8 @@ struct LinuxAioState {
> >>      io_context_t ctx;
> >>      EventNotifier e;
> >>  
> >> -    /* io queue for submit at batch.  Protected by AioContext lock. */
> >> +    /* All data is only used in one I/O thread.  */
> >>      LaioQueue io_q;
> >> -
> >> -    /* I/O completion processing.  Only runs in I/O thread.  */
> >>      QEMUBH *completion_bh;
> >>      int event_idx;
> >>      int event_max;
> >> @@ -102,9 +103,8 @@ static void qemu_laio_process_completion(struct qemu_laiocb *laiocb)
> >>       * later.  Coroutines cannot be entered recursively so avoid doing
> >>       * that!
> >>       */
> >> -    if (!qemu_coroutine_entered(laiocb->co)) {
> >> -        aio_co_wake(laiocb->co);
> >> -    }
> >> +    assert(laiocb->co->ctx == laiocb->ctx->aio_context);
> >> +    qemu_coroutine_enter_if_inactive(laiocb->co);
> >>  }
> >>  
> >>  /**
> >> @@ -238,7 +238,6 @@ static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
> >>      if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
> >>          ioq_submit(s);
> >>      }
> >> -    aio_context_release(s->aio_context);
> >>  }
> > 
> > I certainly expected the aio_context_acquire() in the same function to
> > go away, too! Am I missing something?
> 
> oops

:-)

If it's unintentional, I'm actually surprised that locking without
unlocking later didn't cause problems immediately.
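
For the record, I would have expected the function to end up roughly like
this (reconstructed from the patch context above, including the
qemu_laio_process_completions() call that is not visible in the quoted hunk,
so treat it only as a sketch):

static void qemu_laio_process_completions_and_submit(LinuxAioState *s)
{
    /* No aio_context_acquire()/release() pair any more: the LinuxAioState
     * is only used from the thread that owns it. */
    qemu_laio_process_completions(s);

    if (!s->io_q.plugged && !QSIMPLEQ_EMPTY(&s->io_q.pending)) {
        ioq_submit(s);
    }
}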

Kevin



