From: Fam Zheng
Subject: Re: [Qemu-devel] [RFC PATCH] qcow2: add a readahead cache for qcow2_decompress_cluster
Date: Fri, 27 Dec 2013 11:23:29 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Thunderbird/24.2.0

On 2013-12-27 00:19, Peter Lieven wrote:
while evaluating compressed qcow2 images as a good basis for
virtual machine templates, I found that they produce a lot
of partly redundant (compressed clusters share common physical
sectors) and relatively short reads.

This doesn't hurt if the image resides on a local
filesystem, where we can benefit from the local page cache,
but it adds a significant penalty when accessing remote images
on NFS or similar exports.

This patch effectively implements a readahead of 2 * cluster_size,
which is 2 * 64 kB by default, resulting in a 128 kB readahead. This
matches the common readahead setting on Linux, for instance.

For example, this leads to the following times when converting
a compressed qcow2 image to a local tmpfs partition.

Old:
time ./qemu-img convert nfs://10.0.0.1/export/VC-Ubuntu-LTS-12.04.2-64bit.qcow2 /tmp/test.raw
real    0m24.681s
user    0m8.597s
sys     0m4.084s

New:
time ./qemu-img convert nfs://10.0.0.1/export/VC-Ubuntu-LTS-12.04.2-64bit.qcow2 /tmp/test.raw
real    0m16.121s
user    0m7.932s
sys     0m2.244s

Signed-off-by: Peter Lieven <address@hidden>
---
  block/qcow2-cluster.c |   27 +++++++++++++++++++++++++--
  block/qcow2.h         |    1 +
  2 files changed, 26 insertions(+), 2 deletions(-)

I like this idea, but here's a question. This penalty is actually common to all protocol drivers: curl, gluster, whatever. Readahead is not only good for decompression, but also quite helpful for boot: the BIOS and GRUB may issue sequential one-sector I/O requests, synchronously, and thus suffer from the high latency of network communication. So if we want to do this, I think we should share it with other format and protocol combinations.

Fam
