Re: [PATCH 09/17] block: Refactor bdrv_has_zero_init{,_truncate}

From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH 09/17] block: Refactor bdrv_has_zero_init{,_truncate}
Date: Wed, 5 Feb 2020 17:25:37 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.2.1

05.02.2020 17:07, Eric Blake wrote:
On 2/5/20 1:51 AM, Vladimir Sementsov-Ogievskiy wrote:

+typedef enum {
+    /*
+     * bdrv_known_zeroes() should include this bit if the contents of
+     * a freshly-created image with no backing file reads as all
+     * zeroes without any additional effort.  If .bdrv_co_truncate is
+     * set, then this must be clear if BDRV_ZERO_TRUNCATE is clear.

I understand that this is preexisting logic, but could I ask: why? What's
wrong if a driver can guarantee that a created file is all-zero, but is not
sure about file resizing? I agree that it's normal for these flags to have
the same value, but what is the reason for this restriction?..

If areas added by truncation (or growth, rather) are always zero, then
the file can always be created with size 0 and grown from there.  Thus,
images where truncation adds zeroed areas will in general read as all
zeroes after creation.

This means that if the truncation bit is set, then the create bit should be
set.. But here we say that if truncation is clear, then the create bit must
be clear.

Max, did we get the logic backwards?

So, the only possible combination of flags where they differ is create=0 and
truncate=1.. How is that possible?

For preallocated qcow2 images, it depends on the storage whether they
are actually 0 after creation.  Hence qcow2_has_zero_init() then defers
to bdrv_has_zero_init() of s->data_file->bs.

But when you truncate them (with PREALLOC_MODE_OFF, as
BlockDriver.bdrv_has_zero_init_truncate()’s comment explains), the new
area is always going to be 0, regardless of initial preallocation.

ah yes, due to qcow2 zero clusters.

Hmm. Do we actually set the zero flag on unallocated clusters when resizing a 
qcow2 image?  That would be an O(n) operation (we have to visit the L2 entry 
for each added cluster, even if only to set the zero cluster bit).  Or do we 
instead just rely on the fact that qcow2 is inherently sparse, and that when 
you resize the guest-visible size without writing any new clusters, it is 
only subsequent guest access to those addresses that finally allocates 
clusters, making resize O(1) (update the qcow2 metadata cluster, but not any 
L2 tables) while still reading 0 from the new data?  To some extent, that's 
what the allocation mode is supposed to control.

We must mark new clusters as ZERO at least when there is a _larger_ backing 
file, to prevent data from the backing file becoming visible to the guest. 
But we don't do it. It's a bug, and there is a fixing series from Kevin, 
which I've just pinged:
"[PATCH for-4.2? v3 0/8] block: Fix resize (extending) of short overlays"

What about with external data images, where a resize in guest-visible length 
requires a resize of the underlying data image?  There, we DO have to worry 
about whether the data image resizes with zeroes (as in the filesystem) or with 
random data (as in a block device).

Best regards,
