[Qemu-devel] [PATCH 0/2] native Linux AIO support revisited


From: Christoph Hellwig
Subject: [Qemu-devel] [PATCH 0/2] native Linux AIO support revisited
Date: Thu, 20 Aug 2009 16:58:03 +0200
User-agent: Mutt/1.3.28i

This patchset introduces support for native Linux AIO.  The first patch
just refactors the existing thread-pool-based AIO emulation into a
cleaner layering, which allows the native AIO support to be implemented
more easily.
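
As an illustration of the kind of layering this enables, here is a
minimal sketch of a backend interface in which raw-posix only calls an
init hook and a submit hook and gets notified through a completion
callback.  The names (aio_backend, aio_completion_cb, ...) are
hypothetical and not taken from the actual patch:

/* Hypothetical sketch, not qemu code: a "lean" AIO backend interface.
 * The thread-pool emulation and the native Linux AIO code would each
 * provide one of these, and raw-posix would pick one at open time. */
#include <stddef.h>
#include <sys/types.h>

/* Called by the backend when a request finishes; ret is the byte count
 * on success or a negative errno. */
typedef void aio_completion_cb(void *opaque, int ret);

typedef struct aio_backend {
    /* Set up backend state (thread pool, io_context_t, ...); returns an
     * opaque handle or NULL on failure. */
    void *(*init)(void);

    /* Queue one read or write and return 0, or a negative errno if the
     * request could not be submitted. */
    int (*submit)(void *state, int fd, void *buf, size_t bytes,
                  off_t offset, int is_write,
                  aio_completion_cb *cb, void *opaque);
} aio_backend;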

The second patch introduces real native Linux AIO support, although due
to limitations in the kernel implementation we can only use it for
cache=none.  It is vaguely based on Anthony's earlier patches, but due to
the refactoring in the first patch it is much simpler.  Instead of
trying to fit into the model of the POSIX AIO API we directly integrate
into the raw-posix code with a very lean interface (see the first patch
for a more detailed explanation).  That also means we can register
the AIO completion eventfd directly with the qemu poll handler instead of
needing an additional indirection.
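
For reference, here is a small standalone sketch (not qemu code; error
handling trimmed, and the default device path is just a placeholder) of
the mechanism described above: a Linux AIO request tagged with an
eventfd, so the eventfd can be handed to a poll()-based main loop and
completions are only reaped once it becomes readable.  Build with -laio.

#define _GNU_SOURCE             /* for O_DIRECT */
#include <libaio.h>
#include <sys/eventfd.h>
#include <poll.h>
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* placeholder path; any O_DIRECT-capable file or device works */
    const char *path = argc > 1 ? argv[1] : "/dev/sdb";
    int fd = open(path, O_RDONLY | O_DIRECT);
    int efd = eventfd(0, 0);
    io_context_t ctx = 0;
    void *buf;

    if (fd < 0 || efd < 0 || io_setup(128, &ctx) < 0) {
        perror("setup");
        return 1;
    }
    /* O_DIRECT wants aligned buffers; 4096 is enough for most devices */
    if (posix_memalign(&buf, 4096, 4096)) {
        return 1;
    }

    struct iocb iocb, *iocbs[1] = { &iocb };
    io_prep_pread(&iocb, fd, buf, 4096, 0);
    io_set_eventfd(&iocb, efd);         /* completion bumps the eventfd */

    if (io_submit(ctx, 1, iocbs) != 1) {
        perror("io_submit");
        return 1;
    }

    /* qemu would register efd with its main loop; here we poll directly */
    struct pollfd pfd = { .fd = efd, .events = POLLIN };
    poll(&pfd, 1, -1);

    uint64_t nready;
    if (read(efd, &nready, sizeof(nready)) == sizeof(nready)) {
        struct io_event ev;
        int n = io_getevents(ctx, 1, 1, &ev, NULL);
        printf("reaped %d event(s), res=%ld\n", n, (long)ev.res);
    }

    io_destroy(ctx);
    free(buf);
    close(efd);
    close(fd);
    return 0;
}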

The native AIO code performs slightly better than the thread pool on most
workloads I've thrown at it, and uses a lot less CPU time doing so:

iozone -s 1024m -r $num -I -f /dev/sdb

output is in KB/s:

        write 16k  read 16k  write 64k  read 64k  write 256k  read 256k
native      39133     75462     100980    156169      133642     168343
qemu        29998     48334      79870    116393      133090     161360
qemu+aio    32151     52902      82513    123893      133767     164113


dd if=/dev/zero of=$dev bs=20M oflag=direct count=400
dd if=$dev of=/dev/zero bs=20M iflag=direct count=400

output is in MB/s:

            write  read
native        116   123
qemu          116   100
qemu+aio      116   121

For all of this the AIO code used significantly less CPU time (no
comparison to native due to VM startup overhead and other issues):

                real        user       sys
qemu      25m45.885s  1m36.422s  1m49.394s
qemu+aio  25m36.950s  1m14.178s  1m13.179s

Note that the results vary quite a bit from run to run, so qemu+aio
being faster in one of the tests above shouldn't mean too much; it has
also been minimally slower in some.  From various runs I would say that
for larger block sizes we meet native performance, a little bit sooner
with AIO, and a little bit later without.

All these results are on a raw host device and using virtio.  With image
files on a filesystem there are potential blocking points in the AIO
implementation.  Those are relatively small or non-existent on already
allocated (and, at least for XFS, that includes preallocated) files, but
for sparse files they include waiting for disk I/O during allocations and
need to be avoided so as not to kill performance.  All the results also
already include the MSI support for virtio-blk, btw.
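
Purely as an illustration (not something from this patchset): one way to
sidestep the sparse-file allocation blocking mentioned above is to
preallocate the image's blocks before handing it to qemu, e.g. with
posix_fallocate(); whether that is enough still depends on the host
filesystem.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <image> <size-in-MB>\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t len = (off_t)atoll(argv[2]) * 1024 * 1024;
    /* posix_fallocate() returns an errno value directly instead of -1 */
    int err = posix_fallocate(fd, 0, len);
    if (err) {
        fprintf(stderr, "posix_fallocate: %s\n", strerror(err));
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}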

Based on this I would recommend including this patchset, but not using it
by default for now.  After some testing I would suggest enabling it by
default for host devices and investigating a way to make it easily usable
for files, possibly including some kernel support to tell us which files
are "safe".

These patches require my patch to make pthreads mandatory to be applied
first, which is already in Anthony's queue.  If you want to use them with
qemu-kvm you also need to back out the compatfd changes to raw-block.c
first.




