qemu-devel


From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [Qemu-block] [PATCH v2 00/11] aio: Introduce handler type to fix nested aio_poll for dataplane
Date: Fri, 28 Aug 2015 12:53:20 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

On Wed, Jul 29, 2015 at 12:42:03PM +0800, Fam Zheng wrote:
> v2: Switch to disable/enable model. [Paolo]
> 
> Most existing nested aio_poll()'s in the block layer do not account for
> potential new r/w requests dispatched from ioeventfds and NBD exports, which
> can cause responsiveness issues (e.g. bdrv_drain_all will not return when new
> requests keep coming) or even wrong semantics (e.g. qmp_transaction cannot
> enforce atomicity due to aio_poll in bdrv_drain_all).
> 
> Previous attempts to address this issue include a new op blocker [1],
> bdrv_lock [2] and nested AioContext (patches not posted to qemu-devel).
> 
> This approach is based on an idea proposed by Paolo Bonzini. The original
> proposal was to introduce "aio_context_disable_client /
> aio_context_enable_client to filter AioContext handlers according to the
> 'client', e.g. AIO_CLIENT_DATAPLANE (ioeventfd), AIO_CLIENT_PROTOCOL,
> AIO_CLIENT_NBD_SERVER, AIO_CLIENT_CONTEXT, ... Extend
> aio_set_{event_notifier,fd}_handler to pass a client (type)."
> 
> After this series, the block layer's aio_poll() will only process the
> "protocol" fds used in block I/O, plus ctx->notifier for aio_notify(); other
> aio_poll()'s remain unchanged.
> 
> The biggest advantage over approaches [1] and [2] is that no change is needed
> in the virtio-{blk,scsi}-dataplane code, and this doesn't depend on
> converting QMP to coroutines.
> 
> The Windows implementation is compile-tested only.
> 
> [1]: https://lists.gnu.org/archive/html/qemu-block/2015-05/msg00800.html
> [2]: http://lists.nongnu.org/archive/html/qemu-block/2015-06/msg00027.html
> 
> 
> Fam Zheng (11):
>   aio: Introduce "type" in aio_set_fd_handler and aio_set_event_notifier
>   aio: Save type to AioHandler
>   block: Mark fd handlers as "protocol"
>   nbd: Mark fd handlers client type as "nbd server"
>   aio: Mark ctx->notifier's client type as "context"
>   dataplane: Mark host notifiers' client type as "dataplane"
>   aio-posix: introduce aio_{disable,enable}_clients
>   aio-win32: Implement aio_{disable,enable}_clients
>   block: Introduce bdrv_aio_poll
>   block: Replace nested aio_poll with bdrv_aio_poll
>   block: Only poll block layer fds in bdrv_aio_poll
> 
>  aio-posix.c                     | 23 ++++++++++++++--
>  aio-win32.c                     | 22 +++++++++++++++-
>  async.c                         |  3 ++-
>  block.c                         |  2 +-
>  block/curl.c                    | 16 +++++++-----
>  block/io.c                      | 28 +++++++++++++-------
>  block/iscsi.c                   |  9 +++----
>  block/linux-aio.c               |  5 ++--
>  block/nbd-client.c              | 10 ++++---
>  block/nfs.c                     | 19 ++++++--------
>  block/qed-table.c               |  8 +++---
>  block/sheepdog.c                | 32 +++++++++++++++--------
>  block/ssh.c                     |  5 ++--
>  block/win32-aio.c               |  5 ++--
>  blockjob.c                      |  2 +-
>  hw/block/dataplane/virtio-blk.c |  6 +++--
>  hw/scsi/virtio-scsi-dataplane.c | 24 +++++++++++------
>  include/block/aio.h             | 33 +++++++++++++++++++++++
>  include/block/block.h           |  2 ++
>  nbd.c                           |  4 ++-
>  qemu-img.c                      |  2 +-
>  qemu-io-cmds.c                  |  4 +--
>  tests/test-aio.c                | 58 +++++++++++++++++++++++------------------
>  23 files changed, 219 insertions(+), 103 deletions(-)

What is the status of this series?

Paolo's points about Patch 11 & 12 stand.  It seems the fd type concept
is good but the bdrv_aio_poll() mechanism requires changes to fully
solve the problem.

Stefan


