Re: [Qemu-block] [Qemu-devel] callout to *file in bdrv_co_get_block_status


From: Peter Lieven
Subject: Re: [Qemu-block] [Qemu-devel] callout to *file in bdrv_co_get_block_status
Date: Mon, 27 Mar 2017 15:21:57 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1

On 20.03.2017 at 17:56, Paolo Bonzini wrote:

On 20/03/2017 17:43, Peter Lieven wrote:
On 20.03.2017 at 15:05, Paolo Bonzini wrote:
On 20/03/2017 14:35, Peter Lieven wrote:
On 20.03.2017 at 14:23, Paolo Bonzini wrote:
On 20/03/2017 14:13, Peter Lieven wrote:
On 20.03.2017 at 13:47, Peter Lieven wrote:
commit 5daa74a6ebce7543aaad178c4061dc087bb4c705
Author: Paolo Bonzini <address@hidden>
Date:   Wed Sep 4 19:00:38 2013 +0200

     block: look for zero blocks in bs->file

     Reviewed-by: Eric Blake <address@hidden>
     Signed-off-by: Paolo Bonzini <address@hidden>
     Signed-off-by: Stefan Hajnoczi <address@hidden>
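
For context, the hunk that commit adds to the generic bdrv_co_get_block_status() looks roughly like the following. This is a paraphrased sketch of the 2013-era block.c code, not the verbatim diff:

/* If the format layer reports "data, not zero" and gives a valid offset
 * into bs->file, ask the protocol layer whether that range actually
 * reads back as zeroes (e.g. a hole in the underlying file). */
if (bs->file &&
    (ret & BDRV_BLOCK_DATA) && !(ret & BDRV_BLOCK_ZERO) &&
    (ret & BDRV_BLOCK_OFFSET_VALID)) {
    ret2 = bdrv_co_get_block_status(bs->file, ret >> BDRV_SECTOR_BITS,
                                    *pnum, pnum);
    if (ret2 >= 0) {
        /* Errors are ignored; this only adds extra information. */
        ret |= (ret2 & BDRV_BLOCK_ZERO);
    }
}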


It was introduced while introducing bdrv_get_block_status. I don't know what the real issue was that this patch addressed.
Is it possible that this optimization was added especially for RAW? I believed that raw would forward the get_block_status call to bs->file, but it looks like it doesn't.
If this one was for RAW, would it be an option to move this callout to the raw-format driver and remove it from the generic code?
It was meant for both raw and qcow2.
Okay, but as Fam mentioned, qcow2 metadata should know that a cluster is zero. Do you remember what the issue was?
I said that already: preallocated metadata. Also, at the time pre-qcow2v3 was more important.
Yes, but Fam said that with preallocated metadata the clusters should be zero, or was that not true before qcow2v3?
Zero clusters didn't exist before qcow2v3 I think.

Are you using libiscsi, block devices or files?
It's a mixture: raw with libiscsi or LVM, and qcow2 and vmdk either with libnfs or on local storage.

I stumbled across the issue with lseek on a tmpfs because in the build process for our templates I temporarily have vmdks on a tmpfs, and it takes ages before qemu-img convert starts to run (it iterates over every 64 KB cluster with that callout to find_allocation, and for some reason lseek is very slow on tmpfs).
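
To make the cost concrete, here is a minimal standalone sketch of the kind of probe find_allocation() does in the raw driver: one lseek(SEEK_DATA) plus one lseek(SEEK_HOLE) per queried offset. The file name is only an example and error handling is kept minimal:

#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "test.img";  /* example path */
    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    off_t offset = 0;
    /* Where does data start at or after 'offset'? (-1/ENXIO: hole up to EOF) */
    off_t data = lseek(fd, offset, SEEK_DATA);
    if (data == (off_t)-1) {
        printf("no data at or after offset %lld\n", (long long)offset);
    } else {
        /* Where does this data extent end? */
        off_t hole = lseek(fd, data, SEEK_HOLE);
        printf("data from %lld to %lld, hole afterwards\n",
               (long long)data, (long long)hole);
    }
    close(fd);
    return 0;
}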
Ok, thanks. Perhaps it's worth benchmarking tmpfs specifically. Apart from the system call overhead (which does not really matter if you're going to do a read), lseek on other filesystems should not be any slower than read.
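
A rough way to benchmark that on tmpfs (an illustrative sketch only; the file name and the 64 KB cluster size are assumptions, and error handling is minimal) is to time one lseek(SEEK_DATA) per cluster against one pread() per cluster over the same image:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>
#include <sys/stat.h>

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "template.vmdk";  /* example */
    const size_t cluster = 64 * 1024;
    struct stat st;
    int fd = open(path, O_RDONLY);
    if (fd < 0 || fstat(fd, &st) < 0) {
        perror(path);
        return 1;
    }
    char *buf = malloc(cluster);
    if (!buf) {
        return 1;
    }

    double t0 = now();
    for (off_t off = 0; off < st.st_size; off += cluster) {
        (void)lseek(fd, off, SEEK_DATA);     /* allocation probe */
    }
    double t1 = now();
    for (off_t off = 0; off < st.st_size; off += cluster) {
        (void)pread(fd, buf, cluster, off);  /* plain read of the cluster */
    }
    double t2 = now();

    printf("lseek(SEEK_DATA) per cluster: %.3f s\n", t1 - t0);
    printf("pread per cluster:            %.3f s\n", t2 - t1);
    free(buf);
    close(fd);
    return 0;
}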

Okay, but even the read is not really necessary if the metadata is correct? Would it be an idea to introduce an inverse flag like BDRV_BLOCK_NOT_ZERO for cases where we know that there really is DATA and thus can avoid the second callout?
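
Purely to illustrate the idea (the flag does not exist today; its name and placement are hypothetical), the guard around the bs->file callout could then look like:

/* Hypothetical: a format driver that knows from its metadata that the
 * range really holds non-zero data sets BDRV_BLOCK_NOT_ZERO, so the
 * generic code can skip the extra query of bs->file. */
if (bs->file &&
    (ret & BDRV_BLOCK_DATA) && !(ret & BDRV_BLOCK_ZERO) &&
    !(ret & BDRV_BLOCK_NOT_ZERO) &&          /* new, hypothetical flag */
    (ret & BDRV_BLOCK_OFFSET_VALID)) {
    /* ... existing callout to bs->file ... */
}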

Peter



