
Re: [Qemu-block] [PATCH] util/hbitmap: fix unaligned reset


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [PATCH] util/hbitmap: fix unaligned reset
Date: Mon, 5 Aug 2019 09:26:47 +0000

02.08.2019 22:21, John Snow wrote:
> 
> 
> On 8/2/19 2:58 PM, Vladimir Sementsov-Ogievskiy wrote:
>> hbitmap_reset is broken: it rounds up the requested region. This leads to
>> the following bug, which is shown by the fixed test:
>>
>> assume granularity = 2
>> set(0, 3) # count becomes 4
>> reset(0, 1) # count becomes 2
>>
>> But the user of the interface assumes that virtual bit 1 should still be
>> dirty, so hbitmap should report the count as 4!
>>
>> In other words, because of granularity, setting one "virtual" bit indeed
>> makes all "virtual" bits in the same chunk dirty. But the same should not
>> happen on reset.
>>
>> Fix this by aligning the bounds correctly.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>
>> Hi all!
>>
>> Hmm, is it a bug or feature? :)
> 
> Very, very good question.
> 
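For concreteness, the commit-message example maps onto the hbitmap API like
this (a minimal sketch; "granularity = 2" above means two virtual bits per
chunk, which is passed to hbitmap_alloc as a log2 value, i.e. 1):

    #include "qemu/osdep.h"
    #include "qemu/hbitmap.h"

    static void demo(void)
    {
        HBitmap *hb = hbitmap_alloc(10, 1); /* 2^1 = 2 virtual bits/chunk */

        hbitmap_set(hb, 0, 3);              /* bits 0-3 become dirty */
        assert(hbitmap_count(hb) == 4);

        hbitmap_reset(hb, 0, 1);            /* buggy: also rounds up, so bit 1
                                             * is cleared although it was
                                             * never requested */
        assert(hbitmap_count(hb) == 2);     /* with the fix this stays 4 */

        hbitmap_free(hb);
    }
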
>> I don't have a test for mirror yet, but I think that sync mirror may be
>> broken because of this, as do_sync_target_write() seems to use unaligned
>> resets.
>>
> 
> Honestly I was worried about this -- if you take a look at my patches
> where I add new bitmap sync modes, I bent over backwards to align
> requests for the sync=top bitmap initialization methods because I was
> worried about this possibly being the case.
> 
> 
> I'm not sure what the "right" behavior ought to be.
> 
> Let's say you have a granularity of 8 bytes:
> 
> If you reset 0-3 in one call and then 4-7 in the next, what happens? If
> the caller naively thinks there's a 1:1 relationship, it might actually
> expect that to reflect a cleared bit. With alignment protection, we'll
> just fail to clear it both times and it remains set.
> 
> On the other hand, if you do allow partial clears, the first reset for
> 0-3 will toggle off 4-7 too, even though we might rely on the fact that
> it's actually still dirty.
> 
> Whether or not that's dangerous depends on the context, and only the
> caller knows the context. I think we need to make the semantic effect of
> the reset "obvious" to the caller.
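Spelled out with the same API (a sketch only, assuming a bitmap allocated
with granularity 3, i.e. 2^3 = 8 virtual bits per chunk):

    hbitmap_set(hb, 0, 8);       /* the whole chunk 0-7 is dirty */

    /* With alignment protection (reject/shrink partial requests): */
    hbitmap_reset(hb, 0, 4);     /* no fully covered chunk -> no-op */
    hbitmap_reset(hb, 4, 4);     /* same again: the chunk stays set */

    /* With partial clears (round up, the current behavior): */
    hbitmap_reset(hb, 0, 4);     /* also clears 4-7, which may still be
                                  * genuinely dirty */
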
> 
> 
> I envision this:
> 
> - hbitmap_reset(bitmap, start, length)
>    returns -EINVAL if the range is not properly aligned

hbitmap_reset doesn't return a value; I think it should be an assertion

> 
> - hbitmap_reset_flags(bitmap, flags, start, length)
>    if (flags & HBITMAP_ALIGN_DOWN) align request to only full bits
>    if (flags & HBITMAP_ALIGN_UP) align request to cover any bit even
> partially touched by the specified range
>    otherwise, pass the range through as-is to hbitmap_reset (and possibly
> get -EINVAL if the caller did not align the request).
> 
> 
> That way the semantics are always clear to the caller.
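A sketch of how that could look (hypothetical throughout: neither
hbitmap_reset_flags nor the HBITMAP_ALIGN_* flags exist in the tree; only
the alignment macros are the usual ones from qemu/osdep.h):

    /* Hypothetical helper following the proposal above. */
    int hbitmap_reset_flags(HBitmap *hb, unsigned flags,
                            uint64_t start, uint64_t count)
    {
        uint64_t gran = UINT64_C(1) << hb->granularity;
        uint64_t end = start + count;

        if (flags & HBITMAP_ALIGN_DOWN) {
            /* Shrink to the bits fully covered by the request. */
            start = ROUND_UP(start, gran);
            end = QEMU_ALIGN_DOWN(end, gran);
        } else if (flags & HBITMAP_ALIGN_UP) {
            /* Grow to every bit even partially touched by the request. */
            start = QEMU_ALIGN_DOWN(start, gran);
            end = ROUND_UP(end, gran);
        } else if (!QEMU_IS_ALIGNED(start, gran) ||
                   !QEMU_IS_ALIGNED(end, gran)) {
            return -EINVAL;  /* caller did not align the request */
        }

        if (end > start) {
            hbitmap_reset(hb, start, end - start);
        }
        return 0;
    }
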

Hmm, I doubt there is any use for ALIGN_UP. In most cases it's safe to think
that something clear is dirty (and this is how hbitmap actually works on
set/get), but it seems always unsafe to ALIGN_UP a reset..

So, I think it should default to ALIGN_DOWN, or just assert that the request
is aligned (which anyway leads to implementing a helper
hbitmap_reset_align_up)..
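
The assertion variant would amount to a couple of lines at the top of
hbitmap_reset (a sketch; the special case for the tail is needed because
orig_size itself need not be aligned to the granularity):

    uint64_t gran = UINT64_C(1) << hb->granularity;

    /* Only aligned resets allowed, except a short tail at orig_size. */
    assert(QEMU_IS_ALIGNED(start, gran));
    assert(QEMU_IS_ALIGNED(count, gran) || start + count == hb->orig_size);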

> 
> --js
> 
>>   tests/test-hbitmap.c |  2 +-
>>   util/hbitmap.c       | 24 +++++++++++++++++++-----
>>   2 files changed, 20 insertions(+), 6 deletions(-)
>>
>> diff --git a/tests/test-hbitmap.c b/tests/test-hbitmap.c
>> index 592d8219db..0008025a9f 100644
>> --- a/tests/test-hbitmap.c
>> +++ b/tests/test-hbitmap.c
>> @@ -424,7 +424,7 @@ static void test_hbitmap_granularity(TestHBitmapData *data,
>>       hbitmap_test_set(data, 0, 3);
>>       g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
>>       hbitmap_test_reset(data, 0, 1);
>> -    g_assert_cmpint(hbitmap_count(data->hb), ==, 2);
>> +    g_assert_cmpint(hbitmap_count(data->hb), ==, 4);
>>   }
>>   
>>   static void test_hbitmap_iter_granularity(TestHBitmapData *data,
>> diff --git a/util/hbitmap.c b/util/hbitmap.c
>> index 7905212a8b..61a813994a 100644
>> --- a/util/hbitmap.c
>> +++ b/util/hbitmap.c
>> @@ -473,15 +473,29 @@ void hbitmap_reset(HBitmap *hb, uint64_t start, uint64_t count)
>>   {
>>       /* Compute range in the last layer.  */
>>       uint64_t first;
>> -    uint64_t last = start + count - 1;
>> +    uint64_t last;
>> +    uint64_t end = start + count;
>> +    uint64_t gran = UINT64_C(1) << hb->granularity;
>>   
>> -    trace_hbitmap_reset(hb, start, count,
>> -                        start >> hb->granularity, last >> hb->granularity);
>> +    /*
>> +     * We should clear only bits, fully covered by requested region. Otherwise
>> +     * we may clear something that is actually still dirty.
>> +     */
>> +    first = DIV_ROUND_UP(start, gran);
>>   
>> -    first = start >> hb->granularity;
>> -    last >>= hb->granularity;
>> +    if (end == hb->orig_size) {
>> +        end = DIV_ROUND_UP(end, gran);
>> +    } else {
>> +        end = end >> hb->granularity;
>> +    }
>> +    if (end <= first) {
>> +        return;
>> +    }
>> +    last = end - 1;
>>       assert(last < hb->size);
>>   
>> +    trace_hbitmap_reset(hb, start, count, first, last);
>> +
>>       hb->count -= hb_count_between(hb, first, last);
>>       if (hb_reset_between(hb, HBITMAP_LEVELS - 1, first, last) &&
>>           hb->meta) {
>>
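
Tracing the fixed code through the commit-message example (hb->granularity
is 1, so gran = 2, and orig_size is larger than the range touched):

    set(0, 3):   marks chunks 0 and 1              -> count = 4
    reset(0, 1): first = DIV_ROUND_UP(0, 2) = 0
                 end   = 1 >> 1            = 0
                 end <= first -> early return; count stays 4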


-- 
Best regards,
Vladimir
