


From: Eric Blake
Subject: Re: [PATCH v6 10/10] qcow2: Forward ZERO_WRITE flag for full preallocation
Date: Thu, 23 Apr 2020 11:15:14 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.7.0

On 4/23/20 11:04 AM, Kevin Wolf wrote:

>> Hmm.  When we get block status, it is very easy to tell that something reads
>> as zero when the qcow2 file has the zero bit set, but when the qcow2 file
>> does not have the zero bit set, we have to then query the format layer
>> whether it reads as zeros (which is expensive enough for some format layers
>> that we no longer report things as reading as zero). I'm worried that
>> optimizing this case could penalize later block status.

> That's just how preallocation works. If you don't want that, you need
> preallocation=off.

Good point. And if I recall, didn't we already have a discussion (or even patches) to optimize whether querying the format layer during block status was even worth the effort, depending on heuristics of the size of the format layer which in turn is based on whether there was preallocation? So not a show-stopper.
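To sketch the tradeoff (a simplified, invented model of the decision, not the real qcow2 block-status code):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model: with the qcow2 zero bit set, block status can report
 * "reads as zero" from L2 metadata alone; without it, proving zeroes would
 * require querying the layer below, which can be expensive enough that we
 * skip the query and simply do not report the range as zero. */
enum cluster_type {
    CLUSTER_UNALLOCATED,  /* no L2 mapping */
    CLUSTER_ZERO,         /* zero bit set in the L2 entry */
    CLUSTER_NORMAL,       /* data cluster; contents unknown from metadata */
};

/* Returns true iff metadata alone proves the cluster reads as zeroes. */
static bool reads_as_zero_cheaply(enum cluster_type t, bool has_backing)
{
    switch (t) {
    case CLUSTER_ZERO:
        return true;           /* zero bit: no lower-layer query needed */
    case CLUSTER_UNALLOCATED:
        return !has_backing;   /* otherwise defers to the backing file */
    case CLUSTER_NORMAL:
    default:
        return false;          /* would need an expensive lower query */
    }
}
```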


>> We already can tell the difference between a cluster that has the zero bit
>> set but is not preallocated, vs. has the zero bit set and is preallocated.
>> Are we really forcing a copy-on-write to a cluster that is marked zero but
>> preallocated?  Is the problem that we don't have a way to distinguish
>> between 'reads as zeroes, allocated, but we don't know state of format
>> layer' and 'reads as zeroes, allocated, and we know format layer reads as
>> zeroes'?

> Basically, yes. If we had this, we could have a type of cluster where
> writing to it still involves a metadata update (to clear the zero flag),
> but never copy-on-write, even for partial writes.
>
> I'm not sure if this would cover a very relevant case, though.

I also wonder if Berto's subcluster patches might play into this scenario.
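As a rough illustration of the missing state Kevin describes (this is an invented model, not actual qcow2 code — the `file_reads_zero` flag is precisely what qcow2 does not have today):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: qcow2 can mark a cluster "zero, plain" or "zero,
 * preallocated", but has no flag recording that the *file layer* of a
 * preallocated zero cluster is known to contain zeroes.  With such a
 * flag, a partial write would only need a metadata update (clearing the
 * zero bit), never a copy-on-write of the untouched part of the cluster. */
struct zero_cluster {
    bool preallocated;     /* host cluster is already allocated */
    bool file_reads_zero;  /* hypothetical: file layer known all-zero */
};

static bool partial_write_needs_cow(struct zero_cluster c)
{
    if (!c.preallocated) {
        return true;  /* must allocate and zero-fill around the write */
    }
    /* Preallocated: COW is only needed if we cannot trust the file layer
     * to already hold zeroes in the region the guest did not write. */
    return !c.file_reads_zero;
}
```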


>> Hmm - just noticing things: how does this series interplay with the existing
>> bdrv_has_zero_init_truncate?  Should this series automatically handle
>> BDRV_REQ_ZERO_WRITE on a bdrv_co_truncate(PREALLOC_NONE) request for all
>> drivers that report true, even if that driver does not advertise support for
>> the BDRV_REQ_ZERO_WRITE flag?

> Hmm... It feels risky to me.

>> Or worded differently, is bdrv_has_zero_init_truncate even still
>> necessary (when it is documented only to cover the PREALLOC_NONE case),
>> or should we get rid of it in favor of using BDRV_REQ_ZERO_WRITE
>> everywhere instead? (Which in turn involves visiting all drivers that
>> previously advertised bdrv_has_zero_init_truncate... but I already have
>> work in my tree trying to do that as part of preparing to add an
>> autoclear bit to qcow2 to make it faster to report when a qcow2 image is
>> known all-zero content...)

Looks like I'll be rebasing my work on top of this series.


>>> +        } else {
>>> +            ret = -1;
>>> +        }

>> Here, ret == -1 does not imply whether errp is set - but annoyingly, errp
>> CAN be set if bdrv_co_truncate() failed.

>>> +        if (ret < 0) {
>>> +            ret = bdrv_co_truncate(bs->file, new_file_size, false, prealloc, 0,
>>> +                                   errp);

>> And here, you are passing a possibly-set errp back to bdrv_co_truncate().
>> That is a bug that can abort.  You need to pass NULL to the first
>> bdrv_co_truncate() call or else clear errp prior to trying a fallback to
>> this second bdrv_co_truncate() call.

> Yes, you're right. If nothing else comes up, I'll fix this while
> applying.

> Kevin
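For reference, the safe fallback shape looks roughly like this (a toy stand-in for QEMU's Error API, just to show why a possibly-set errp must not be passed to the second call; the `fake_truncate` helper is invented):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Minimal stand-in for QEMU's Error API: like error_setg(), setting an
 * already-set error is a programming bug and aborts.  That is exactly what
 * happens if a possibly-set errp is reused for a fallback call. */
typedef struct Error { const char *msg; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        if (*errp) {
            abort();  /* double-set: the abort Eric warns about */
        }
        *errp = malloc(sizeof(Error));
        (*errp)->msg = msg;
    }
}

static void error_free(Error *err) { free(err); }

/* Invented truncate stand-in that always fails and sets errp. */
static int fake_truncate(Error **errp)
{
    error_setg(errp, "truncate failed");
    return -1;
}

/* The fixed pattern: give the first attempt a local Error, discard it,
 * and let only the fallback attempt set the caller's errp. */
static int truncate_with_fallback(Error **errp)
{
    Error *local_err = NULL;

    int ret = fake_truncate(&local_err);  /* first attempt */
    if (ret < 0) {
        error_free(local_err);            /* clear before falling back */
        ret = fake_truncate(errp);        /* safe: errp still unset */
    }
    return ret;
}
```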


--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



