From: John Snow
Subject: Re: [Qemu-block] [PATCH v2 11/18] block/backup: upgrade copy_bitmap to BdrvDirtyBitmap
Date: Fri, 5 Jul 2019 12:52:56 -0400
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.7.2


On 7/4/19 1:29 PM, Max Reitz wrote:
> On 03.07.19 23:55, John Snow wrote:
>> This simplifies some interface matters; namely the initialization and
>> (later) merging the manifest back into the sync_bitmap if it was
>> provided.
>>
>> Signed-off-by: John Snow <address@hidden>
>> ---
>>  block/backup.c | 76 ++++++++++++++++++++++++--------------------------
>>  1 file changed, 37 insertions(+), 39 deletions(-)
>>
>> diff --git a/block/backup.c b/block/backup.c
>> index d7fdafebda..9cc5a7f6ca 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
> 
> [...]
> 
>> @@ -202,7 +204,7 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>>      cow_request_begin(&cow_request, job, start, end);
>>  
>>      while (start < end) {
>> -        if (!hbitmap_get(job->copy_bitmap, start)) {
>> +        if (!bdrv_dirty_bitmap_get(job->copy_bitmap, start)) {
>>              trace_backup_do_cow_skip(job, start);
>>              start += job->cluster_size;
>>              continue; /* already copied */
>> @@ -296,14 +298,16 @@ static void backup_abort(Job *job)
>>  static void backup_clean(Job *job)
>>  {
>>      BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
>> +    BlockDriverState *bs = blk_bs(s->target);
> 
> I’d prefer to call it target_bs, because “bs” is usually the source node.
> 

Sure;

> Which brings me to a question: Why is the copy bitmap assigned to the
> target in the first place?  Wouldn’t it make more sense on the source?
> 

*cough*;

the idea was that the target is *most likely* to be the temporary node
created for the purpose of this backup, even though backup does not
explicitly create the node.

So ... by creating it on the target, we avoid cluttering up the
query-block results; otherwise it doesn't really matter which node we
create it on.

I can move it over to the source, but then the testing code has to get
a little smarter to find the "right" anonymous bitmap. That's not
impossible, but I thought keeping it on the target would actually be a
convenience for people.

>> +
>> +    if (s->copy_bitmap) {
>> +        bdrv_release_dirty_bitmap(bs, s->copy_bitmap);
>> +        s->copy_bitmap = NULL;
>> +    }
>> +
>>      assert(s->target);
>>      blk_unref(s->target);
>>      s->target = NULL;
>> -
>> -    if (s->copy_bitmap) {
>> -        hbitmap_free(s->copy_bitmap);
>> -        s->copy_bitmap = NULL;
>> -    }
>>  }
>>  
>>  void backup_do_checkpoint(BlockJob *job, Error **errp)
> 
> [...]
> 
>> @@ -387,59 +391,49 @@ static bool bdrv_is_unallocated_range(BlockDriverState *bs,
>>  
>>  static int coroutine_fn backup_loop(BackupBlockJob *job)
>>  {
>> -    int ret;
>>      bool error_is_read;
>>      int64_t offset;
>> -    HBitmapIter hbi;
>> +    BdrvDirtyBitmapIter *bdbi;
>>      BlockDriverState *bs = blk_bs(job->common.blk);
>> +    int ret = 0;
>>  
>> -    hbitmap_iter_init(&hbi, job->copy_bitmap, 0);
>> -    while ((offset = hbitmap_iter_next(&hbi)) != -1) {
>> +    bdbi = bdrv_dirty_iter_new(job->copy_bitmap);
>> +    while ((offset = bdrv_dirty_iter_next(bdbi)) != -1) {
>>          if (job->sync_mode == MIRROR_SYNC_MODE_TOP &&
>>              bdrv_is_unallocated_range(bs, offset, job->cluster_size))
>>          {
>> -            hbitmap_reset(job->copy_bitmap, offset, job->cluster_size);
>> +            bdrv_set_dirty_bitmap(job->copy_bitmap, offset, job->cluster_size);
> 
> Should be reset.
> 

Whoa, oops! I got through testing FULL but, clearly, not TOP. This also
doesn't trigger any failures in our test suite, so I need to make sure
this case gets covered.

Thank you.
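
For the record, the corrected call should presumably just be the reset
counterpart of the same line (untested sketch):

    bdrv_reset_dirty_bitmap(job->copy_bitmap, offset, job->cluster_size);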

>>              continue;
>>          }
>>  
>>          do {
>>              if (yield_and_check(job)) {
>> -                return 0;
>> +                goto out;
>>              }
>>              ret = backup_do_cow(job, offset,
>>                                  job->cluster_size, &error_is_read, false);
>>              if (ret < 0 && backup_error_action(job, error_is_read, -ret) ==
>>                             BLOCK_ERROR_ACTION_REPORT)
>>              {
>> -                return ret;
>> +                goto out;
>>              }
>>          } while (ret < 0);
>>      }
>>  
>> -    return 0;
>> + out:
>> +    bdrv_dirty_iter_free(bdbi);
>> +    return ret;
>>  }
>>  
>>  /* init copy_bitmap from sync_bitmap */
>>  static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
>>  {
>> -    uint64_t offset = 0;
>> -    uint64_t bytes = job->len;
>> -
>> -    while (bdrv_dirty_bitmap_next_dirty_area(job->sync_bitmap,
>> -                                             &offset, &bytes))
>> -    {
>> -        hbitmap_set(job->copy_bitmap, offset, bytes);
>> -
>> -        offset += bytes;
>> -        if (offset >= job->len) {
>> -            break;
>> -        }
>> -        bytes = job->len - offset;
>> -    }
>> +    bdrv_dirty_bitmap_merge_internal(job->copy_bitmap, job->sync_bitmap,
>> +                                     NULL, true);
> 
> How about asserting that this was successful?
> 

Sure.
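
Something like this, assuming bdrv_dirty_bitmap_merge_internal() returns a
bool success flag (untested sketch):

    bool ret;

    ret = bdrv_dirty_bitmap_merge_internal(job->copy_bitmap, job->sync_bitmap,
                                           NULL, true);
    assert(ret);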

>>  
>>      /* TODO job_progress_set_remaining() would make more sense */
>>      job_progress_update(&job->common.job,
>> -        job->len - hbitmap_count(job->copy_bitmap));
>> +                        job->len - bdrv_get_dirty_count(job->copy_bitmap));
>>  }
>>  
>>  static int coroutine_fn backup_run(Job *job, Error **errp)
> 
> [...]
> 
>> @@ -670,7 +668,7 @@ BlockJob *backup_job_create(const char *job_id, BlockDriverState *bs,
>>   error:
>>      if (copy_bitmap) {
>>          assert(!job || !job->copy_bitmap);
>> -        hbitmap_free(copy_bitmap);
>> +        bdrv_release_dirty_bitmap(bs, copy_bitmap);
> 
> If you decide to keep the copy_bitmap on the target, s/bs/target/.

Ah, heck. Clearly I didn't test this error pathway either. I'll have to
add some more tests to make sure the error recovery works right.
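
If the bitmap does stay on the target, I suppose the error path would then
read something like this (sketch; assuming 'target' is the BlockDriverState
passed into backup_job_create()):

    bdrv_release_dirty_bitmap(target, copy_bitmap);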

> 
> Max
> 


