
Re: [Qemu-devel] [RFC] Replace posix-aio with custom thread pool


From: Andrea Arcangeli
Subject: Re: [Qemu-devel] [RFC] Replace posix-aio with custom thread pool
Date: Thu, 11 Dec 2008 16:53:35 +0100

On Thu, Dec 11, 2008 at 04:24:37PM +0100, Gerd Hoffmann wrote:
> Well, linux kernel aio has its share of problems too:
> 
>   * Anthony mentioned it may block on certain circumstances (forgot
>     which ones), and you can't figure beforehand to turn off aio then.

We have worse problems as long as bdrv_read/write are used by qcow2. And
we can fix the host kernel in the long run if this becomes an issue.

>   * It can't handle block allocation.  Kernel handles that by doing
>     such writes synchronously via VFS layer (instead of the separate
>     aio code paths).  Leads to horrible performance and bug reports
>     such as "installs on sparse files are very slow".

I think here you mean O_DIRECT, regardless of the aio/sync API. I doubt
aio has any relevance to block allocation, so whatever problem we have
with the kernel API and O_DIRECT should also be there with the sync API
+ userland threads and O_DIRECT.

>   * support for vectored aio isn't that old.  IIRC it was added
>     somewhen around 2.6.20 (newer that current suse/redhat enterprise
>     versions).  Which IMHO means you can't expect it being present
>     unconditionally.

I think this is a false alarm: the whole point of kernel AIO is that
even with O_DIRECT enabled, all bios are pushed to the disk before the
disk queue is unplugged, which is all we care about to get decent disk
bandwidth with zerocopy dma. Or at least that's the way it's supposed
to work if aio is implemented correctly at the bio level.

So in kernels that don't support IOCB_CMD_READV/WRITEV, we simply have
to pass an array of iocbs through io_submit (i.e. convert the iov into
a vector of iocbs, instead of a single iocb pointing to the iov).
Internally to io_submit a single dma command should be generated and
the same sg list should be built as if we had used READV/WRITEV. In
theory READV/WRITEV should be just a cpu-saving feature; it shouldn't
influence disk bandwidth. If it does, it means the bio layer is broken
and needs fixing.
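
To make that concrete, here's a minimal sketch of the fallback path
(libaio-based; the function name is mine and error/event handling is
omitted): one (fd, iov, offset) request becomes a vector of iocbs
pushed through a single io_submit(), so the block layer still sees all
the pieces before the queue is unplugged.

  #include <libaio.h>
  #include <sys/types.h>
  #include <sys/uio.h>

  /* Sketch only: one IOCB_CMD_PREAD per iovec segment, all submitted
   * in a single io_submit() call. */
  static int submit_readv_compat(io_context_t ctx, int fd,
                                 const struct iovec *iov, int iovcnt,
                                 off_t offset)
  {
      struct iocb iocbs[iovcnt];      /* preallocated in real code */
      struct iocb *iocbps[iovcnt];
      int i;

      for (i = 0; i < iovcnt; i++) {
          io_prep_pread(&iocbs[i], fd, iov[i].iov_base,
                        iov[i].iov_len, offset);
          offset += iov[i].iov_len;
          iocbps[i] = &iocbs[i];
      }
      return io_submit(ctx, iovcnt, iocbps);  /* one syscall, N iocbs */
  }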

If IOCB_CMD_READV/WRITEV is available, good; if not, we go with
READ/WRITE and more dynamically allocated iocbs. It just needs a
conversion routine from (iovec, file, offset) to iocb pointers when
IOCB_CMD_READV/WRITEV is not available. The iocb array can be
preallocated along with the iovec when we detect that
IOCB_CMD_READV/WRITEV is missing. I have a cache layer that does this,
and I'll just make its output selectable in iovec or iocb terms, with
iocbs chosen when the host OS is Linux and IOCB_CMD_READV/WRITEV is
not available.
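
As an illustration of that preallocation (the struct and helper names
below are made up, not from any existing patch; allocation failures
are ignored): the request carries both representations, and the iocb
array only exists when the host lacks vectored AIO, so the cache layer
can hand out whichever form the submission path wants.

  #include <libaio.h>
  #include <stdlib.h>
  #include <sys/types.h>
  #include <sys/uio.h>

  /* Sketch only: the iocb side gets filled with the same loop as in
   * the previous sketch. */
  struct aio_req {
      int           fd;
      off_t         offset;
      struct iovec *iov;     /* always built */
      int           iovcnt;
      struct iocb  *iocbs;   /* only when vectored AIO is missing */
  };

  static struct aio_req *aio_req_alloc(int max_segs, int have_vectored_aio)
  {
      struct aio_req *req = calloc(1, sizeof(*req));

      req->iov = calloc(max_segs, sizeof(struct iovec));
      if (!have_vectored_aio)       /* linux host, no READV/WRITEV */
          req->iocbs = calloc(max_segs, sizeof(struct iocb));
      return req;
  }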

> Threads will be there anyway for kvm smp.

Yes, I didn't mean those threads ;). I love threads, but I love
threads that are CPU bound and allow us to exploit the whole power of
the system! For storage, threads are pure overscheduling overhead as
far as I can tell, given we have an async API to use and we already
have to deal with the pain of async programming. So it's worth getting
the full benefit of it (i.e. no thread/overscheduling overhead).

If aio inside the kernel is too complex, then use kernel threads;
that's still better than user threads.

I mean, if we keep using only threads we should get rid of bdrv_aio*
completely and move the qcow2 code into a separate thread instead of
keeping it running in the io thread. If we stick to threads, then it's
worth getting the full benefit of threads (i.e. not having to deal
with the pains of async programming, and moving the qcow2 computation
to a separate CPU). That's something I tried doing, but I ended up
having to add locks all over qcow2 in order to submit multiple qcow2
requests in parallel (otherwise the lock would be global and I
couldn't differentiate between a bdrv_read for qcow2 metadata that
must be executed with the qcow2 mutex held, and a bdrv_aio_readv that
can run lockless from the point of view of the current qcow2 instance
- the qcow2 parent may take its own locks then, etc.). Basically it
breaks all backends, something I'm not comfortable with right now just
to get zerocopy dma working at platter speed. Hence I stick with async
programming for now...

> Well, wait for glibc isn't going to fly.  glibc waits for posix, and
> posix waits for a reference implementation (which will not be glibc).

Agree.

> > and kernel with preadv/pwritev
> 
> With that in place you don't need kernel aio any more, then you can
> really do it in userspace with threads.  But that probably would be
> linux-only  ^W^W^W

Waiting for preadv/pwritev is just the 'quicker' version of waiting
for glibc's aio_readv. And because it remains Linux-only, I prefer
kernel AIO, which fixes cfq and should be optimal anyway (with or
without READV/WRITEV support).
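
For comparison, this is roughly what the threaded path would look like
once a preadv()/pwritev() pair existed on Linux (the prototype below
follows the NetBSD man page linked at the end; the wrapper itself is
hypothetical): one blocking call per request, issued from a pool
worker, with no per-thread fd tricks needed.

  #include <sys/types.h>
  #include <sys/uio.h>

  /* Hypothetical Linux wrapper, prototype as in the BSD man page. */
  ssize_t preadv(int fd, const struct iovec *iov, int iovcnt,
                 off_t offset);

  /* Sketch only: run from a worker thread, not from the io thread.
   * A real version would loop on short reads and map errno. */
  static ssize_t worker_preadv(int fd, const struct iovec *iov,
                               int iovcnt, off_t offset)
  {
      return preadv(fd, iov, iovcnt, offset);
  }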

So in the end: we either open the file 64 times (which I think is
perfectly coherent with nfs unless the nfs client is broken, but
Anthony may know nfs better - I'm not a heavy nfs user here), or we go
with kernel AIO... you know my preference. That said, opening the file
64 times is probably simpler, if it has been confirmed that it doesn't
break nfs. Breaking nfs is not an option here: nfs is the ideal shared
storage for migration (the semantics we need for a safe migration are
so weak that it's surely worth keeping nfs supported as 100% reliable
shared storage for KVM virtualization).
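
For reference, my reading of the open-it-64-times variant (the helper
name below is my own): each worker thread gets a private descriptor
from its own open(), so lseek()+readv() can stand in for the missing
preadv() without racing on a shared file offset.

  #include <sys/types.h>
  #include <sys/uio.h>
  #include <unistd.h>

  /* Sketch only: 'fd' is this worker's private descriptor, opened
   * once per thread at startup (hence the 64 opens).  lseek() only
   * moves this thread's offset, so the seek+readv pair is race-free
   * against the other workers. */
  static ssize_t worker_readv_privfd(int fd, const struct iovec *iov,
                                     int iovcnt, off_t offset)
  {
      if (lseek(fd, offset, SEEK_SET) != offset)
          return -1;
      return readv(fd, iov, iovcnt);
  }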

> > ahem: http://www.daemon-systems.org/man/preadv.2.html

Too bad nobody implemented it yet...



