Re: [Qemu-devel] block: regression: savevm/delvm too slow
From: Kevin Wolf
Subject: Re: [Qemu-devel] block: regression: savevm/delvm too slow
Date: Wed, 22 Jun 2011 10:56:07 +0200
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.17) Gecko/20110428 Fedora/3.1.10-1.fc15 Thunderbird/3.1.10
On 21.06.2011 19:29, Luiz Capitulino wrote:
> I'm getting the following times when doing a savevm and delvm in current
> HEAD eb47d7c5d (time in minutes, each time corresponds to a savevm/delvm run):
>
> savevm: 5:28m, 11:00m, 11:10m
> delvm: 4:30m, 4:40m, > 15m
>
> Now, trying with qemu 0.13.0 I get:
>
> savevm: < 1:00m, 4:00m, 4:34m
> delvm: few seconds for all cases
>
> Yes, you read it correctly, I tried with 0.13.0 because 0.14.0 also has the
> bug. This is the pattern I see when I run strace against HEAD while running
> the savevm command:
>
> pwrite(7, "\0\1\0\1\0\1\0\1\0\2\0\2\0\2\0\2\0\2\0\2\0\2\0\2\0\2\0\2\0\2\0\2"...,
>        65536, 196608) = 65536
> fdatasync(7)          = 0
>
> I.e. an fdatasync() follows every single pwrite(). Something similar also
> happens with delvm. I don't see this pattern with 0.13.0.
>
> The good news is that I've tracked it down and Mr. git bisect says that:
>
> 29c1a7301af752de6721e031d31faa48887204bd is the first bad commit
> commit 29c1a7301af752de6721e031d31faa48887204bd
> Author: Kevin Wolf <address@hidden>
> Date: Mon Jan 10 17:17:28 2011 +0100
>
> qcow2: Use QcowCache
>
> Use the new functions of qcow2-cache.c for everything that works on
> refcount block and L2 tables.
>
> Signed-off-by: Kevin Wolf <address@hidden>
>
> :040000 040000 83e364185d37845bb27f1dccd1249d14cc7a9a1e 0c91964a52b5869333d4fb2cb0fa83104151359e M block
Let me guess... You're using cache=writethrough?
Previously, qcow2_update_snapshot_refcount() implemented its own kind of
writeback cache that was used even with cache=writethrough. Now we're using
the generic Qcow2Cache, which implements write-through behaviour for
cache=writethrough, so every table update is synced to disk immediately.
We could fix this by temporarily switching the cache to writeback mode.
Kevin