From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-devel] [PATCH v5 01/11] block/backup: simplify backup_incremental_init_copy_bitmap
Date: Mon, 14 Jan 2019 14:01:50 +0000

14.01.2019 16:10, Max Reitz wrote:
> On 29.12.18 13:20, Vladimir Sementsov-Ogievskiy wrote:
>> Simplify backup_incremental_init_copy_bitmap using the function
>> bdrv_dirty_bitmap_next_dirty_area.
>>
>> Note: move to job->len instead of bitmap size: it should not matter but
>> less code.
>>
>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>> ---
>>   block/backup.c | 40 ++++++++++++----------------------------
>>   1 file changed, 12 insertions(+), 28 deletions(-)
> 
> Overall: What is this function even supposed to do?  To me, it looks
> like it marks all areas in job->copy_bitmap dirty that are dirty in
> job->sync_bitmap.
> 
> If so, wouldn't just replacing this by hbitmap_merge() simplify things
> further?
> 
>> diff --git a/block/backup.c b/block/backup.c
>> index 435414e964..fbe7ce19e1 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -406,43 +406,27 @@ static int coroutine_fn 
>> backup_run_incremental(BackupBlockJob *job)
> 
> [...]
> 
>> +    while (bdrv_dirty_bitmap_next_dirty_area(job->sync_bitmap,
>> +                                             &offset, &bytes))
>> +    {
>> +        uint64_t cluster = offset / job->cluster_size;
>> +        uint64_t last_cluster = (offset + bytes) / job->cluster_size;
>>   
>> -        next_cluster = DIV_ROUND_UP(offset, job->cluster_size);
>> -        hbitmap_set(job->copy_bitmap, cluster, next_cluster - cluster);
>> -        if (next_cluster >= end) {
>> +        hbitmap_set(job->copy_bitmap, cluster, last_cluster - cluster + 1);
> 
> Why the +1?  Shouldn't the division for last_cluster round up instead?
> 
>> +
>> +        offset = (last_cluster + 1) * job->cluster_size;
> 
> Same here.

last_cluster is not "end"; it is the last dirty cluster. So the number of dirty
clusters is last_cluster - cluster + 1, and the next offset is calculated with
that same +1.

If I rounded the division up, I would get the last cluster in most cases, but
"end" (the cluster after the last) whenever (offset + bytes) % job->cluster_size
== 0, so how would I use that?


-- 
Best regards,
Vladimir
