Subject: Re: [PATCH v2 4/4] io: follow coroutine AioContext in qio_channel_yield()
From: Eric Blake
Date: Tue, 29 Aug 2023 14:32:46 -0500
User-agent: NeoMutt/20230517
On Tue, Aug 29, 2023 at 12:06:22PM -0400, Stefan Hajnoczi wrote:
> The ongoing QEMU multi-queue block layer effort makes it possible for
> multiple threads to process I/O in parallel. The nbd block driver is not
> compatible with the multi-queue block layer yet because QIOChannel cannot
> be used easily from coroutines running in multiple threads. This series
> changes the QIOChannel API to make that possible.
>
> In the current API, calling qio_channel_attach_aio_context() sets the
> AioContext where qio_channel_yield() installs an fd handler prior to yielding:
>
> qio_channel_attach_aio_context(ioc, my_ctx);
> ...
> qio_channel_yield(ioc); // my_ctx is used here
> ...
> qio_channel_detach_aio_context(ioc);
>
> This API design has limitations: reading and writing must be done in the
> same AioContext, and moving between AioContexts involves a cumbersome
> sequence of API calls that is not suitable on a per-request basis.
>
> There is no fundamental reason why a QIOChannel needs to run within the
> same AioContext every time qio_channel_yield() is called. QIOChannel
> only uses the AioContext while inside qio_channel_yield(). The rest of
> the time, QIOChannel is independent of any AioContext.
>
> In the new API, qio_channel_yield() queries the AioContext from the current
> coroutine using qemu_coroutine_get_aio_context(). There is no need to
> explicitly attach/detach AioContexts anymore and
> qio_channel_attach_aio_context() and qio_channel_detach_aio_context() are
> gone.
> One coroutine can read from the QIOChannel while another coroutine writes from
> a different AioContext.
>
> This API change allows the nbd block driver to use QIOChannel from any thread.
> It's important to keep in mind that the block driver already synchronizes
> QIOChannel access and ensures that two coroutines never read simultaneously or
> write simultaneously.
>
> This patch updates all users of qio_channel_attach_aio_context() to the
> new API. Most conversions are simple, but vhost-user-server requires a
> new qemu_coroutine_yield() call to quiesce the vu_client_trip()
> coroutine when not attached to any AioContext.
>
> While the API has become simpler, there is one wart: QIOChannel has a
> special case for the iohandler AioContext (used for handlers that must
> not run in nested event loops). I didn't find an elegant way to preserve
> that behavior, so I added a new API called
> qio_channel_set_follow_coroutine_ctx(ioc, true|false) for opting in to
> the new AioContext model. By default QIOChannel uses the iohandler
> AioContext. Code that formerly called qio_channel_attach_aio_context()
> now calls qio_channel_set_follow_coroutine_ctx(ioc, true) once after the
> QIOChannel is created.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
> +++ b/include/io/channel-util.h
> @@ -49,4 +49,27 @@
> QIOChannel *qio_channel_new_fd(int fd,
> Error **errp);
>
> +/**
> + * qio_channel_yield:
Wrong function name...
> + * @read_fd: the file descriptor for the read handler
> + * @read_ctx: the AioContext for the read handler
> + * @io_read: the read handler
> + * @write_fd: the file descriptor for the write handler
> + * @write_ctx: the AioContext for the write handler
> + * @io_write: the write handler
> + * @opaque: the opaque argument to the read and write handler
> + *
> + * Set the read and write handlers when @read_ctx and @write_ctx are
> + * non-NULL, respectively. To leave a handler in its current state, pass a
> + * NULL AioContext. To clear a handler, pass a non-NULL AioContext and a
> + * NULL handler.
> + */
> +void qio_channel_util_set_aio_fd_handler(int read_fd,
...should be this.
> + AioContext *read_ctx,
> + IOHandler *io_read,
> + int write_fd,
> + AioContext *write_ctx,
> + IOHandler *io_write,
> + void *opaque);
> +
> #endif /* QIO_CHANNEL_UTIL_H */
> diff --git a/include/io/channel.h b/include/io/channel.h
> index 229bf36910..5f9dbaab65 100644
> --- a/include/io/channel.h
> +++ b/io/channel.c
>
> -static void qio_channel_set_aio_fd_handlers(QIOChannel *ioc)
> +static void coroutine_fn
> +qio_channel_set_fd_handlers(QIOChannel *ioc, GIOCondition condition)
> {
> - IOHandler *rd_handler = NULL, *wr_handler = NULL;
> + AioContext *ctx = ioc->follow_coroutine_ctx ?
> + qemu_coroutine_get_aio_context(qemu_coroutine_self()) :
> + iohandler_get_aio_context();
> + AioContext *read_ctx = NULL;
> + IOHandler *io_read = NULL;
> + AioContext *write_ctx = NULL;
> + IOHandler *io_write = NULL;
> +
> + if (condition == G_IO_IN) {
> + ioc->read_coroutine = qemu_coroutine_self();
> + ioc->read_ctx = ctx;
> + read_ctx = ctx;
> + io_read = qio_channel_restart_read;
> +
> + /*
> + * Thread safety: if the other coroutine is set and its AioContext
> + * match ours, then there is mutual exclusion between read and write
matches
> + * because they share a single thread and it's safe to set both read
> + * and write fd handlers here. If the AioContext does not match ours,
> + * then both threads may run in parallel but there is no shared state
> + * to worry about.
> + */
> + if (ioc->write_coroutine && ioc->write_ctx == ctx) {
> + write_ctx = ctx;
> + io_write = qio_channel_restart_write;
> + }
> + } else if (condition == G_IO_OUT) {
> + ioc->write_coroutine = qemu_coroutine_self();
> + ioc->write_ctx = ctx;
> + write_ctx = ctx;
> + io_write = qio_channel_restart_write;
> + if (ioc->read_coroutine && ioc->read_ctx == ctx) {
> + read_ctx = ctx;
> + io_read = qio_channel_restart_read;
> + }
> + } else {
> + abort();
> + }
> +
> + qio_channel_set_aio_fd_handler(ioc, read_ctx, io_read,
> + write_ctx, io_write, ioc);
> +}
> +
With those minor fixes,
Reviewed-by: Eric Blake <eblake@redhat.com>
--
Eric Blake, Principal Software Engineer
Red Hat, Inc.
Virtualization: qemu.org | libguestfs.org