qemu-block

Re: [RFC PATCH v2 15/26] qcow2: Add subcluster support to zero_in_l2_slice()


From: Max Reitz
Subject: Re: [RFC PATCH v2 15/26] qcow2: Add subcluster support to zero_in_l2_slice()
Date: Mon, 4 Nov 2019 16:10:58 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.1.1

On 04.11.19 16:04, Max Reitz wrote:
> On 26.10.19 23:25, Alberto Garcia wrote:
>> Setting the QCOW_OFLAG_ZERO bit of the L2 entry is forbidden if an
>> image has subclusters. Instead, the individual 'all zeroes' bits must
>> be used.
>>
>> Signed-off-by: Alberto Garcia <address@hidden>
>> ---
>>  block/qcow2-cluster.c | 14 ++++++++++----
>>  1 file changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
>> index e67559152f..3e4ba8d448 100644
>> --- a/block/qcow2-cluster.c
>> +++ b/block/qcow2-cluster.c
>> @@ -1852,7 +1852,7 @@ static int zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
>>      assert(nb_clusters <= INT_MAX);
>>  
>>      for (i = 0; i < nb_clusters; i++) {
>> -        uint64_t old_offset;
>> +        uint64_t old_offset, l2_entry = 0;
>>          QCow2ClusterType cluster_type;
>>  
>>          old_offset = get_l2_entry(s, l2_slice, l2_index + i);
>> @@ -1869,12 +1869,18 @@ static int zero_in_l2_slice(BlockDriverState *bs, uint64_t offset,
>>  
>>          qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_slice);
>>          if (cluster_type == QCOW2_CLUSTER_COMPRESSED || unmap) {
>> -            set_l2_entry(s, l2_slice, l2_index + i, QCOW_OFLAG_ZERO);
>>              qcow2_free_any_clusters(bs, old_offset, 1, QCOW2_DISCARD_REQUEST);
> 
> It feels wrong to me to free the cluster before updating the L2 entry.

(Although that ordering is pre-existing, and set_l2_entry() is just an
in-cache operation anyway :-/)
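
For illustration, here is a minimal sketch of the ordering being suggested,
built only from the identifiers visible in the quoted diff. The per-subcluster
"all zeroes" update mentioned in the commit message is stood in for by a
hypothetical set_l2_bitmap_all_zeroes() helper and a has_subclusters() check,
neither of which appears in the quoted code:

    /* Sketch only: rewrite the cached L2 entry first and free the old
     * cluster afterwards, so the entry never points at storage that has
     * already been released. */
    if (cluster_type == QCOW2_CLUSTER_COMPRESSED || unmap) {
        if (has_subclusters(s)) {   /* hypothetical predicate */
            /* With subclusters, QCOW_OFLAG_ZERO is forbidden; set the
             * individual "all zeroes" bits instead (helper name assumed). */
            set_l2_bitmap_all_zeroes(s, l2_slice, l2_index + i);
        } else {
            set_l2_entry(s, l2_slice, l2_index + i, QCOW_OFLAG_ZERO);
        }
        qcow2_cache_entry_mark_dirty(s->l2_table_cache, l2_slice);
        /* Only now release the cluster the old entry referred to. */
        qcow2_free_any_clusters(bs, old_offset, 1, QCOW2_DISCARD_REQUEST);
    }

The exact shape of the subcluster bitmap update is an assumption; the point is
only the order of the in-cache L2 update relative to qcow2_free_any_clusters().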

Max


