From: Stefan Hajnoczi
Subject: Re: [Qemu-devel] [PATCH 26/27] block/parallels: optimize linear image expansion
Date: Thu, 23 Apr 2015 10:26:11 +0100
User-agent: Mutt/1.5.23 (2014-03-12)

On Wed, Apr 22, 2015 at 05:25:14PM +0300, Denis V. Lunev wrote:
> On 22/04/15 17:18, Stefan Hajnoczi wrote:
> >On Wed, Mar 11, 2015 at 01:28:20PM +0300, Denis V. Lunev wrote:
> >>Plain image expansion spends a lot of time updating the image file
> >>size, which seriously hurts performance. The following simple test
> >>   qemu-img create -f parallels -o cluster_size=64k ./1.hds 64G
> >>   qemu-io -n -c "write -P 0x11 0 1024M" ./1.hds
> >>could be improved if the format driver pre-allocated space in the
> >>image file in reasonably sized chunks.
> >>
> >>This patch preallocates 128 MB using bdrv_write_zeroes(), which
> >>normally uses a fallocate() call internally. The older truncate()
> >>behavior remains available as a fallback, selectable through the
> >>image open options added by the previous patch.
> >>
> >>The benefit is around 15%.
> >qcow2 doesn't use bdrv_truncate() at all.  It simply writes past the end
> >of bs->file to grow the file.  Can you use this approach instead?
> This is worse from a performance point of view.
> 
> OK, there is no difference if a big write comes from the guest; in
> that case a single write does the job just fine. But if the file is
> extended by several separate writes, the situation changes: each
> write updates the inode metadata, and with it comes a journal write.
> This metadata update costs even more on a network filesystem, and
> much more on a distributed filesystem (at least one additional MDS
> write transaction).
> 
> This is the main reason to follow this approach here.
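
To make the argument concrete: every write that extends the file bumps
i_size, so each extension pays for a journalled metadata update, while
preallocation pays that cost once per chunk. A rough userspace analogue
(illustrative only; it assumes an ext4-like journalling filesystem and
uses plain POSIX calls rather than the QEMU block layer):

  /* prealloc_demo.c - compare extend-on-write vs preallocate-once.
   * Build: cc -O2 -o prealloc_demo prealloc_demo.c
   * Run:   ./prealloc_demo            (grow by extending writes)
   *        ./prealloc_demo prealloc   (posix_fallocate() first)
   */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  #define CHUNK  (64 * 1024)          /* 64k, as in the test above */
  #define CHUNKS 16384                /* 1 GB total */

  int main(int argc, char **argv)
  {
      int prealloc = argc > 1 && !strcmp(argv[1], "prealloc");
      static char buf[CHUNK];
      memset(buf, 0x11, sizeof(buf));

      int fd = open("1.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
      if (fd < 0) {
          perror("open");
          return 1;
      }
      if (prealloc) {
          /* one metadata update up front instead of one per write */
          if (posix_fallocate(fd, 0, (off_t)CHUNK * CHUNKS)) {
              perror("posix_fallocate");
              return 1;
          }
      }
      for (int i = 0; i < CHUNKS; i++) {
          /* without preallocation, every pwrite() extends i_size */
          if (pwrite(fd, buf, CHUNK, (off_t)i * CHUNK) != CHUNK) {
              perror("pwrite");
              return 1;
          }
      }
      close(fd);
      return 0;
  }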

You are right, this seems like a good approach.
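
For the archive, the allocation path could look roughly like the sketch
below. This is not the literal patch hunk: the BDRVParallelsState
fields (data_end, tracks, prealloc_size, prealloc_mode) and the
PRL_PREALLOC_MODE_* constants are illustrative, and the sector-based
bdrv_write_zeroes()/bdrv_truncate() signatures of the current block
layer are assumed:

  /* Sketch: grow a linear parallels image in big chunks instead of
   * per-cluster.  Field and constant names are hypothetical. */
  static int64_t allocate_cluster(BlockDriverState *bs)
  {
      BDRVParallelsState *s = bs->opaque;
      int64_t pos = s->data_end;          /* first free sector */
      uint32_t space = s->tracks;         /* one cluster, in sectors */
      int ret;

      if (bdrv_getlength(bs->file) >> BDRV_SECTOR_BITS < pos + space) {
          space += s->prealloc_size;      /* e.g. 128 MB, in sectors */
          if (s->prealloc_mode == PRL_PREALLOC_MODE_FALLOCATE) {
              /* raw-posix turns this into fallocate() where the
               * filesystem supports it */
              ret = bdrv_write_zeroes(bs->file, pos, space, 0);
          } else {
              /* fallback selected via the image open options */
              ret = bdrv_truncate(bs->file,
                                  (pos + space) << BDRV_SECTOR_BITS);
          }
          if (ret < 0) {
              return ret;
          }
      }

      s->data_end += s->tracks;
      return pos;
  }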


