Re: [PATCH 4/6] qemu-img: rebase: avoid unnecessary COW operations


From: Andrey Drobyshev
Subject: Re: [PATCH 4/6] qemu-img: rebase: avoid unnecessary COW operations
Date: Tue, 29 Aug 2023 16:27:29 +0300
User-agent: Mozilla Thunderbird

On 8/25/23 18:00, Hanna Czenczek wrote:
> On 01.06.23 21:28, Andrey Drobyshev wrote:
>> When rebasing an image from one backing file to another, we need to
>> compare data from old and new backings.  If the diff between that data
>> happens to be unaligned to the target cluster size, we might end up
>> doing partial writes, which would lead to copy-on-write and additional
>> IO.
>>
>> Consider the following simple case (virtual_size == cluster_size == 64K):
>>
>> base <-- inc1 <-- inc2
>>
>> qemu-io -c "write -P 0xaa 0 32K" base.qcow2
>> qemu-io -c "write -P 0xcc 32K 32K" base.qcow2
>> qemu-io -c "write -P 0xbb 0 32K" inc1.qcow2
>> qemu-io -c "write -P 0xcc 32K 32K" inc1.qcow2
>> qemu-img rebase -f qcow2 -b base.qcow2 -F qcow2 inc2.qcow2
>>
>> While doing rebase, we'll write half of the cluster to inc2, and the
>> block layer will have to read the 2nd half of the same cluster from the
>> backing image inc1 while doing this write operation, although the whole
>> cluster was already read earlier to perform the data comparison.
>>
>> In order to avoid these unnecessary IO cycles, let's make sure every
>> write request is aligned to the overlay cluster size.
>>
>> Signed-off-by: Andrey Drobyshev <andrey.drobyshev@virtuozzo.com>
>> ---
>>   qemu-img.c | 72 +++++++++++++++++++++++++++++++++++++++---------------
>>   1 file changed, 52 insertions(+), 20 deletions(-)
>>
>> diff --git a/qemu-img.c b/qemu-img.c
>> index 60f4c06487..9a469cd609 100644
>> --- a/qemu-img.c
>> +++ b/qemu-img.c
>> [...]
>>               }
>>
>> +            /* At this point n must be aligned to the target cluster size. */
>> +            if (offset + n < size) {
>> +                assert(n % bdi.cluster_size == 0);
> 
> This is not correct.  First, bdrv_is_allocated_above() operates not on
> the top image, but on images in the backing chain, which may have
> different cluster sizes and so may lead to `n`s that are not aligned to
> the top image’s cluster size:
> 
> $ ./qemu-img create -f qcow2 base.qcow2 64M
> $ ./qemu-img create -f qcow2 -b base.qcow2 -F qcow2 mid.qcow2 64M
> $ ./qemu-img create -f qcow2 -o cluster_size=2M -b mid.qcow2 -F qcow2 top.qcow2 64M
> $ ./qemu-io -c 'write 64k 64k' mid.qcow2
> $ ./qemu-img rebase -b base.qcow2 top.qcow2
> qemu-img: ../qemu-img.c:3845: img_rebase: Assertion `n % bdi.cluster_size == 0' failed.
> [1]    636690 IOT instruction (core dumped)  ./qemu-img rebase -b base.qcow2 top.qcow2
> 
> Second, and this is a more theoretical thing, it would also be broken
> for images with cluster sizes greater than IO_BUF_SIZE.  Now,
> IO_BUF_SIZE is 2 MB, which happens to be precisely the maximum cluster
> size we support for qcow2, and for vmdk we always create images with 64
> kB clusters (I believe), but the vmdk code seems happy to open
> pre-existing images with cluster sizes up to 512 MB. Still, even for
> qcow2, we could easily increase the limit from 2 MB at any point, and
> there is no explicit correlation why IO_BUF_SIZE happens to be exactly
> what the current maximum cluster size for qcow2 is.  One way to get
> around this would be to use MAX(IO_BUF_SIZE, bdi.cluster_size) for the
> buffer size, which would give such an explicit correlation.
> 

I'm not sure whether bluntly allocating buffers of up to 512M is the right
thing to do.  Since we need our buffers to be equal in size, we'd have
to take MAX(old backing cluster size, new backing cluster size, target
cluster size).  As for a potential increase of the qcow2 cluster size
limit, I'd simply increase IO_BUF_SIZE accordingly once that happens.
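
If we did go for that MAX() approach, it'd be smth like the following
(just a sketch; bdi_old_backing/bdi_new_backing are made-up names, the
actual code would have to fill them via bdrv_get_info() on the
respective backing nodes):

    /*
     * Sketch only: size the buffers by the largest cluster size involved,
     * so that a single read always covers a whole cluster of any image we
     * touch.  bdi_old_backing/bdi_new_backing are hypothetical here.
     */
    size_t buf_size = IO_BUF_SIZE;
    buf_size = MAX(buf_size, bdi.cluster_size);
    buf_size = MAX(buf_size, bdi_old_backing.cluster_size);
    buf_size = MAX(buf_size, bdi_new_backing.cluster_size);

    buf_old = blk_blockalign(blk, buf_size);
    buf_new = blk_blockalign(blk_new_backing, buf_size);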

Overall, your first point is enough to simply drop that assert.

However, your remark made me realize there's actually a 3rd case where it
breaks, and that is images with subclusters, since in that case
bdrv_is_allocated_above() will align n to the subcluster size.  While
looking into what exactly qcow2_co_block_status() reports, I realized
that my patch breaks the following:

> qemu-img create -f qcow2 -o cluster_size=1M base.qcow2 1M
> qemu-img create -f qcow2 -b base.qcow2 -F qcow2 -o cluster_size=1M,extended_l2=on inc1.qcow2 1M
> qemu-img create -f qcow2 -b inc1.qcow2 -F qcow2 -o cluster_size=1M inc2.qcow2 1M
> qemu-io -c 'write -P 0xaa 0 32K' -c 'write -P 0xbb 64K 32K' inc1.qcow2
> qemu-img rebase -b base.qcow2 -F qcow2 inc2.qcow2
> qemu-io -c "read -P 0xaa 0 32K" -c "read -P 0xbb 64K 32K" inc2.qcow2
> read 32768/32768 bytes at offset 0
> 32 KiB, 1 ops; 00.00 sec (78.511 MiB/sec and 2512.3671 ops/sec)
> Pattern verification failed at offset 65536, 32768 bytes
> read 32768/32768 bytes at offset 65536
> 32 KiB, 1 ops; 00.00 sec (490.381 MiB/sec and 15692.1822 ops/sec)

That happens because n_old is bounded by n, so we read too small a chunk
of data (the 1st subcluster only).  Since we end up writing whole clusters
to the target anyway, the solution would probably be to round n up to the
cluster size right after the call to bdrv_is_allocated_above():

>             if (prefix_chain_bs) {
>                 uint64_t bytes = n;
>             ...
>             }
> 
>             n = MIN(QEMU_ALIGN_UP(n, bdi.cluster_size), size - offset);
>
Now, if the target also has subclusters, we might end up allocating more
disk space than necessary (i.e. writing a whole cluster instead of several
separate subclusters).  I'm not sure whether we should consider this as
well (aligning n to the subcluster size?) or leave it as is, keeping in
mind the trade-off between disk space and IO ops.
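
E.g. smth like this (purely illustrative; target_has_subclusters and
target_subcluster_size are made-up names here, BlockDriverInfo doesn't
expose a subcluster size, so we'd have to add that first):

    /*
     * Illustration only: if we knew the target's subcluster size, we
     * could align writes to it instead of to the whole cluster, trading
     * a few extra write requests for less allocated space.
     */
    int64_t write_align = target_has_subclusters ? target_subcluster_size
                                                 : bdi.cluster_size;
    n = MIN(QEMU_ALIGN_UP(n, write_align), size - offset);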

In any case I'll add the above scenario to the tests.

>
> [...]
>> +                         */
>> +                        start = QEMU_ALIGN_DOWN(offset + written,
>> +                                                bdi.cluster_size);
> 
> Please add an assertion here that `start >= offset`.  I would rather
> have qemu-img crash than to write out-of-bounds memory data to disk.
> 
> I understand the idea is that this is given anyway because `offset`
> starts at 0 and we always check that `n`, by which we increment
> `offset`, is aligned, but it is absolutely critical that we don’t do an
> out-of-bounds access, so I feel an explicit assertion here is warranted.
> 
>> +                        end = QEMU_ALIGN_UP(offset + written + pnum,
>> +                                            bdi.cluster_size);
> 
> Similarly here, please assert that `end - offset` does not exceed
> the buffer’s bounds.  I know the reasoning is the same, we ensured that
> `n` is aligned, so we can always safely align up `written + pnum`, but
> still.
> 

Agreed. Smth like:

>                          end = QEMU_ALIGN_UP(offset + written + pnum,
>                                              bdi.cluster_size);
>                          end = end > size ? size : end;
> +                        assert(offset <= start && start < end &&
> +                               end <= offset + IO_BUF_SIZE);
>                          ret = blk_pwrite(blk, start, end - start,
>                                           buf_old + (start - offset),
>                                           write_flags);






