From: Eric Blake
Subject: Re: [PATCH v3 4/4] io: follow coroutine AioContext in qio_channel_yield()
Date: Thu, 7 Sep 2023 15:41:08 -0500
User-agent: NeoMutt/20230517

On Wed, Aug 30, 2023 at 06:48:02PM -0400, Stefan Hajnoczi wrote:
> The ongoing QEMU multi-queue block layer effort makes it possible for
> multiple threads to process I/O in parallel. The nbd block driver is not
> compatible with the multi-queue block layer yet because QIOChannel cannot
> be used easily from coroutines running in multiple threads. This series
> changes the QIOChannel API to make that possible.
> 
...
> 
> This API change allows the nbd block driver to use QIOChannel from any thread.
> It's important to keep in mind that the block driver already synchronizes
> QIOChannel access and ensures that two coroutines never read simultaneously or
> write simultaneously.
> 
> This patch updates all users of qio_channel_attach_aio_context() to the
> new API. Most conversions are simple, but vhost-user-server requires a
> new qemu_coroutine_yield() call to quiesce the vu_client_trip()
> coroutine when not attached to any AioContext.
> 
> While the API has become simpler, there is one wart: QIOChannel has a
> special case for the iohandler AioContext (used for handlers that must not
> run in nested event loops). I didn't find an elegant way to preserve that
> behavior, so I added a new API called
> qio_channel_set_follow_coroutine_ctx(ioc, true|false) for opting in to the
> new AioContext model. By default QIOChannel uses the iohandler AioContext.
> Code that formerly called qio_channel_attach_aio_context() now calls
> qio_channel_set_follow_coroutine_ctx(ioc, true) once after the QIOChannel
> is created.
> 
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> Acked-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  include/io/channel-util.h        |  23 ++++++
>  include/io/channel.h             |  69 ++++++++----------
>  include/qemu/vhost-user-server.h |   1 +
>  block/nbd.c                      |  11 +--
>  io/channel-command.c             |  10 ++-
>  io/channel-file.c                |   9 ++-
>  io/channel-null.c                |   3 +-
>  io/channel-socket.c              |   9 ++-
>  io/channel-tls.c                 |   6 +-
>  io/channel-util.c                |  24 +++++++
>  io/channel.c                     | 120 ++++++++++++++++++++++---------
>  migration/channel-block.c        |   3 +-
>  nbd/server.c                     |  14 +---
>  scsi/qemu-pr-helper.c            |   4 +-
>  util/vhost-user-server.c         |  27 +++++--
>  15 files changed, 216 insertions(+), 117 deletions(-)
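
For readers following the conversion described above, the before/after
pattern boils down to this sketch (an illustration, not an excerpt from the
patch; the attach/detach pair is the old API and surrounding setup code is
omitted):

    /* Before: callers moved the channel between AioContexts by hand */
    qio_channel_attach_aio_context(ioc, ctx);
    /* ... coroutine I/O on ioc ... */
    qio_channel_detach_aio_context(ioc);

    /* After: opt in once, right after the channel is created; from then
     * on qio_channel_yield() follows the AioContext of the coroutine
     * that is waiting on the channel */
    qio_channel_set_follow_coroutine_ctx(ioc, true);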

Looks like migration/rdma.c is also impacted:

../migration/rdma.c: In function ‘qio_channel_rdma_class_init’:
../migration/rdma.c:4037:38: error: assignment to ‘void (*)(QIOChannel *, AioContext *, void (*)(void *), AioContext *, void (*)(void *), void *)’ from incompatible pointer type ‘void (*)(QIOChannel *, AioContext *, void (*)(void *), void (*)(void *), void *)’ [-Werror=incompatible-pointer-types]
 4037 |     ioc_klass->io_set_aio_fd_handler = qio_channel_rdma_set_aio_fd_handler;
      |                                      ^
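
Spelled out, the two function-pointer types in that error correspond to the
old and new io_set_aio_fd_handler callback signatures (parameter names taken
from the diff below; IOHandler is QEMU's void (*)(void *) handler typedef):

    /* old: a single AioContext covers both handlers */
    void (*io_set_aio_fd_handler)(QIOChannel *ioc,
                                  AioContext *ctx,
                                  IOHandler *io_read,
                                  IOHandler *io_write,
                                  void *opaque);

    /* new: the read and write handlers may live in different AioContexts */
    void (*io_set_aio_fd_handler)(QIOChannel *ioc,
                                  AioContext *read_ctx,
                                  IOHandler *io_read,
                                  AioContext *write_ctx,
                                  IOHandler *io_write,
                                  void *opaque);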

I'm squashing this in:

diff --git i/migration/rdma.c w/migration/rdma.c
index ca430d319d9..a2a3db35b1d 100644
--- i/migration/rdma.c
+++ w/migration/rdma.c
@@ -3103,22 +3103,23 @@ static GSource *qio_channel_rdma_create_watch(QIOChannel *ioc,
 }

 static void qio_channel_rdma_set_aio_fd_handler(QIOChannel *ioc,
-                                                  AioContext *ctx,
-                                                  IOHandler *io_read,
-                                                  IOHandler *io_write,
-                                                  void *opaque)
+                                                AioContext *read_ctx,
+                                                IOHandler *io_read,
+                                                AioContext *write_ctx,
+                                                IOHandler *io_write,
+                                                void *opaque)
 {
     QIOChannelRDMA *rioc = QIO_CHANNEL_RDMA(ioc);
     if (io_read) {
-        aio_set_fd_handler(ctx, rioc->rdmain->recv_comp_channel->fd, io_read,
-                           io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmain->send_comp_channel->fd, io_read,
-                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(read_ctx, rioc->rdmain->recv_comp_channel->fd,
+                           io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(read_ctx, rioc->rdmain->send_comp_channel->fd,
+                           io_read, io_write, NULL, NULL, opaque);
     } else {
-        aio_set_fd_handler(ctx, rioc->rdmaout->recv_comp_channel->fd, io_read,
-                           io_write, NULL, NULL, opaque);
-        aio_set_fd_handler(ctx, rioc->rdmaout->send_comp_channel->fd, io_read,
-                           io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(write_ctx, rioc->rdmaout->recv_comp_channel->fd,
+                           io_read, io_write, NULL, NULL, opaque);
+        aio_set_fd_handler(write_ctx, rioc->rdmaout->send_comp_channel->fd,
+                           io_read, io_write, NULL, NULL, opaque);
     }
 }
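
The shape of the fix also suggests why two contexts are needed here:
rioc->rdmain serves the inbound direction and rioc->rdmaout the outbound
one, so the read-side completion channels get registered in read_ctx and
the write-side ones in write_ctx, letting each direction follow the
AioContext of whichever coroutine is blocked on it.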



-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization:  qemu.org | libguestfs.org



