
Re: [Qemu-block] [PATCH 1/1] qcow2: avoid extra flushes in qcow2

From: Pavel Borzenkov
Subject: Re: [Qemu-block] [PATCH 1/1] qcow2: avoid extra flushes in qcow2
Date: Wed, 1 Jun 2016 14:35:32 +0300
User-agent: Mutt/1.6.1 (2016-04-27)

On Wed, Jun 01, 2016 at 12:07:01PM +0200, Kevin Wolf wrote:
> On 01.06.2016 at 11:12, Denis V. Lunev wrote:
> > qcow2_cache_flush() calls bdrv_flush() unconditionally after writing
> > cache entries of a particular cache. This can lead to as many as
> > 2 additional fdatasyncs inside bdrv_flush.
> > 
> > We can simply skip all fdatasync calls inside qcow2_co_flush_to_os
> > as bdrv_flush for sure will do the job.
> This looked wrong at first because flushes are needed to keep the right
> order of writes to the different caches. However, I see that you keep
> the flush in qcow2_cache_flush_dependency(), so in the code this is
> actually fine.
> Can you make that more explicit in the commit message?
> > This seriously affects the
> > performance of database operations inside the guest.
> > 
> > Signed-off-by: Denis V. Lunev <address@hidden>
> > CC: Pavel Borzenkov <address@hidden>
> > CC: Kevin Wolf <address@hidden>
> > CC: Max Reitz <address@hidden>
> Do you have performance numbers for master and with your patch applied?
> (No performance related patch should come without numbers in its commit
> message!)

The problem with excessive flushing was found by a couple of performance tests:
  - parallel directory tree creation (from 2 processes)
  - 32 cached writes + fsync at the end in a loop

For the first one, results improved from 2.6 loops/sec to 3.5 loops/sec.
Each loop creates 10^3 directories with 10 files in each.

For the second one, results improved from ~600 fsync/sec to ~1100
fsync/sec. However, this test was run on an SSD, so the gain will
probably be smaller on rotational media.

> What I find interesting is that this seems to help even though
> duplicated flushes should actually be really cheap because there is no
> new data that could be flushed in the second request. Makes me wonder if
> guests send duplicated flushes, too, and whether we should optimise
> that.

SSDs are affected by flushes a lot; it looks like flushes interfere with
their allocation algorithms.

Also, we are not alone on the machine. Other processes might have
written some data after the first flush already, so the second one might
not be that cheap after all (the disk has to wait for that data to reach
persistent media).


> Maybe it would also be interesting to measure how things perform if we
> removed the flush from qcow2_cache_flush_dependency(). This would be
> incorrect code (corruption possible after host crash), but I'd like to
> know how much performance we actually lose here. This is performance
> that could potentially be gained by using a journal.
> Kevin
