From: Denis V. Lunev
Subject: Re: [Qemu-devel] [PATCH 21/27] block/parallels: no need to flush on each block allocation table update
Date: Wed, 22 Apr 2015 17:08:46 +0300
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:31.0) Gecko/20100101 Thunderbird/31.6.0

On 22/04/15 17:05, Stefan Hajnoczi wrote:
> On Wed, Mar 11, 2015 at 01:28:15PM +0300, Denis V. Lunev wrote:
>> From the guest's point of view, any write to the real disk issued
>> before a disk barrier operation may be lost. It is therefore not a
>> problem if a "not synced" new block is lost because the allocation
>> table was not updated when QEMU crashed: this situation is now
>> properly detected and handled using the "inuse" magic and
>> parallels_check.
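
For context, the scheme referred to above works roughly like this:
while a writer has the image open, the on-disk header magic is
switched to a special "in use" value, and it is only restored on a
clean close, so a checker can tell an unclean shutdown apart from a
clean one. Below is a minimal standalone sketch of the idea, with
made-up names and magic values rather than the actual parallels
driver code:

    #include <stdint.h>
    #include <stdio.h>

    /* Minimal standalone illustration of the "inuse" idea; the
     * names and magic values are made up, not the driver's own. */
    #define MAGIC_CLEAN  0x0001u     /* image was closed cleanly    */
    #define MAGIC_INUSE  0x0002u     /* a writer has the image open */

    /* Stand-in for the magic field in the on-disk header. */
    static uint32_t header_magic = MAGIC_CLEAN;

    static void open_for_writing(void)
    {
        if (header_magic == MAGIC_INUSE) {
            /* The previous writer crashed: clusters may have been
             * written whose table entries never reached the disk.
             * A checker (the role parallels_check plays) can scan
             * the image and repair the table before further use. */
            printf("unclean shutdown detected, image needs a check\n");
        }
        /* Mark the image busy; this must be flushed to disk before
         * any data write, or crash detection cannot work. */
        header_magic = MAGIC_INUSE;
    }

    static void close_cleanly(void)
    {
        header_magic = MAGIC_CLEAN;  /* restored only on clean close */
    }

    int main(void)
    {
        open_for_writing();
        close_cleanly();
        return 0;
    }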

>> This patch improves the write performance of
>>    qemu-img create -f parallels -o cluster_size=64k ./1.hds 64G
>>    qemu-io -f parallels -c "write -P 0x11 0 1024k" 1.hds
>> from 45 MB/sec to 160 MB/sec on my SSD. The gain on rotational
>> media is much more significant: from 800 KB/sec to 45 MB/sec.
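
As a back-of-the-envelope check on those numbers: with 64k clusters,
the 1024k write above allocates 16 clusters, so before this patch it
issued 16 synchronous table updates, i.e. 16 flushes. At the quoted
800 KB/sec the write takes roughly 1.3 s, versus about 22 ms at
45 MB/sec, which puts the cost of each flush at around 80 ms, a
plausible figure for a full write-cache flush on a rotational disk.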

>> Signed-off-by: Denis V. Lunev <address@hidden>
>> Reviewed-by: Roman Kagan <address@hidden>
>> CC: Kevin Wolf <address@hidden>
>> CC: Stefan Hajnoczi <address@hidden>
>> ---
>>  block/parallels.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)

>> diff --git a/block/parallels.c b/block/parallels.c
>> index bafc74b..2605c1a 100644
>> --- a/block/parallels.c
>> +++ b/block/parallels.c
>> @@ -118,7 +118,7 @@ static int64_t allocate_cluster(BlockDriverState *bs, int64_t sector_num)
>>      bdrv_truncate(bs->file, (pos + s->tracks) << BDRV_SECTOR_BITS);
>>      s->bat_bitmap[idx] = cpu_to_le32(pos / s->off_multiplier);
>> -    ret = bdrv_pwrite_sync(bs->file,
>> +    ret = bdrv_pwrite(bs->file,
>>              sizeof(ParallelsHeader) + idx * sizeof(s->bat_bitmap[idx]),
>>              s->bat_bitmap + idx, sizeof(s->bat_bitmap[idx]));
>>      if (ret < 0) {
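
The one-line change matters because bdrv_pwrite_sync() is essentially
a write followed by a flush, so every cluster allocation used to
force a full flush just to update one table entry. A simplified
sketch of the distinction, using stand-in declarations rather than
QEMU's exact prototypes:

    #include <stdint.h>

    /* Stand-in declarations, not QEMU's exact prototypes. */
    typedef struct BlockDriverState BlockDriverState;
    int bdrv_pwrite(BlockDriverState *bs, int64_t offset,
                    const void *buf, int count);
    int bdrv_flush(BlockDriverState *bs);

    /* bdrv_pwrite_sync() is essentially a write followed by a
     * flush; the patch keeps the write and drops the flush from
     * the cluster-allocation path. */
    static int pwrite_sync_sketch(BlockDriverState *bs, int64_t offset,
                                  const void *buf, int count)
    {
        int ret = bdrv_pwrite(bs, offset, buf, count); /* update entry */
        if (ret < 0) {
            return ret;
        }
        return bdrv_flush(bs); /* the per-allocation flush removed here */
    }
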
> Please squash this into the write support patch.
ok


