From: Eric Blake
Subject: Re: [PATCH 09/17] block: Refactor bdrv_has_zero_init{,_truncate}
Date: Tue, 4 Feb 2020 13:03:54 -0600
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.4.1

On 2/4/20 11:53 AM, Max Reitz wrote:
> On 31.01.20 18:44, Eric Blake wrote:
>> Having two slightly-different function names for related purposes is
>> unwieldy, especially since I envision adding yet another notion of
>> zero support in an upcoming patch.  It doesn't help that
>> bdrv_has_zero_init() is a misleading name (I originally thought that a
>> driver could only return 1 when opening an already-existing image
>> known to be all zeroes; but in reality many drivers always return 1
>> because it only applies to a just-created image).

> I don’t find it misleading, I just find it meaningless, which then makes
> it open to interpretation (or maybe rather s/interpretation/wishful
> thinking/).

>> Refactor all uses
>> to instead have a single function that returns multiple bits of
>> information, with better naming and documentation.

> It doesn’t make sense to me.  How exactly is it unwieldy?  In the sense
> that we have to deal with multiple rather small implementation functions
> rather than a big one per driver?  Actually, multiple small functions
> sounds better to me – unless the three implementations share common code.

Common code for dealing with encryption, backing files, and so on. It felt like I had a lot of code repetition when keeping functions separate.

> As for the callers, they only want a single flag out of the three, don’t
> they?  If so, it doesn’t really matter for them.

The qemu-img.c caller in patch 10 checks ZERO_CREATE | ZERO_OPEN, so we DO have situations that check more than one bit at once, where separate functions would require two calls.

> In fact, I can imagine that drivers can trivially return
> BDRV_ZERO_TRUNCATE information (because the preallocation mode is
> fixed), whereas BDRV_ZERO_CREATE can be a bit more involved, and
> BDRV_ZERO_OPEN could take even more time because some (constant-time)
> inquiries have to be done.

In looking at the rest of the series, drivers were either completely trivial (in which case, declaring:

.bdrv_has_zero_init = bdrv_has_zero_init_1,
.bdrv_has_zero_init_truncate = bdrv_has_zero_init_1,

was a lot wordier than the new:

.bdrv_known_zeroes = bdrv_known_zeroes_truncate,

), or completely spelled out, in which case both creation and truncation were determined with the same amount of effort.

> And thus callers which just want the trivially obtainable
> BDRV_ZERO_TRUNCATE info have to wait for the BDRV_ZERO_OPEN inquiry,
> even though they don’t care about that flag.

True, but only to a minor extent; and the documentation mentions that the BDRV_ZERO_OPEN calculation MUST NOT be as expensive as a blind block_status loop.

Meanwhile, callers tend to only care about bdrv_known_zeroes() right after opening an image or right before resizing (not repeatedly during runtime); and you also argued elsewhere in this thread that it may be worth having the block layer cache BDRV_ZERO_OPEN and update the cache on any write, at which point the expense in the driver callback really is a one-time call during bdrv_co_open().

And in that case, whether the one-time expense is done via a single function call or via three driver callbacks, the amount of work is the same; but the driver callback interface is easier if there is only one callback (similar to how bdrv_unallocated_blocks_are_zero() calls bdrv_get_info() only for bdi.unallocated_blocks_are_zero, even though BlockDriverInfo tracks much more than that boolean).

In fact, it may be worth consolidating known zeroes support into BlockDriverInfo.

> So I’d leave it as separate functions so drivers can feel free to have
> implementations for BDRV_ZERO_OPEN that take more than mere microseconds
> but that are more accurate.
>
> (Or maybe if you really want it to be a single function, callers could
> pass the mask of flags they care about.  If all flags are trivially
> obtainable, the implementations would then simply create their result
> mask and & it with the caller-given mask.  For implementations where
> some branches could take a bit more time, those branches are only taken
> when the caller cares about the given flag.  But again, I don’t
> necessarily think having a single function is more easily handleable
> than three smaller ones.)

Those are still viable options, but before I repaint the bikeshed along those lines, I'd at least like a review of whether the overall idea of having a notion of 'reads-all-zeroes' is indeed useful enough, regardless of how we implement it as one vs. three driver callbacks.

Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
