
From: Evgeny Yakovlev
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH 1/3] block: ignore flush requests when storage is clean
Date: Fri, 24 Jun 2016 18:54:28 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.8.0

On 24.06.2016 18:31, Eric Blake wrote:
On 06/24/2016 09:06 AM, Denis V. Lunev wrote:
From: Evgeny Yakovlev <address@hidden>

Some guests (win2008 server for example) do a lot of unnecessary
flushing when underlying media has not changed. This adds additional
overhead on host when calling fsync/fdatasync.

This change introduces a dirty flag in BlockDriverState which is set
in bdrv_set_dirty and is checked in bdrv_co_flush. This allows us to
avoid unnesessary flushing when storage is clean.
s/unnesessary/unnecessary/ (I pointed this out against v2,
which makes me wonder if anything else was missed)

Yeah, I fixed that but messed up committing the change to the commit message. It will be fixed in the rebased version.

The problem with excessive flushing was found by a performance test
which does parallel directory tree creation (from 2 processes).
Results improved from 0.424 loops/sec to 0.432 loops/sec.
Each loop creates 10^3 directories with 10 files in each.

Signed-off-by: Evgeny Yakovlev <address@hidden>
Signed-off-by: Denis V. Lunev <address@hidden>
CC: Kevin Wolf <address@hidden>
CC: Max Reitz <address@hidden>
CC: Stefan Hajnoczi <address@hidden>
CC: Fam Zheng <address@hidden>
CC: John Snow <address@hidden>
+++ b/include/block/block_int.h
@@ -418,6 +418,8 @@ struct BlockDriverState {
      int sg;        /* if true, the device is a /dev/sg* */
      int copy_on_read; /* if true, copy read backing sectors into image
                           note this is a reference count */
+    bool dirty;
      bool probed;
Conflicts with the current state of Kevin's block branch (due to my
reordering and conversion of bool parameters); so you'll want to rebase.

