
From: John Snow
Subject: Re: [Qemu-devel] [PATCH v12 07/17] qmp: Add support of "dirty-bitmap" sync mode for drive-backup
Date: Fri, 13 Feb 2015 13:35:21 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.4.0



On 02/13/2015 12:33 PM, Vladimir Sementsov-Ogievskiy wrote:
On 10.02.2015 04:35, John Snow wrote:
......
@@ -278,28 +305,61 @@ static void coroutine_fn backup_run(void *opaque)
              qemu_coroutine_yield();
              job->common.busy = true;
          }
+    } else if (job->sync_mode == MIRROR_SYNC_MODE_DIRTY_BITMAP) {
+        /* Dirty Bitmap sync has a slightly different iteration method */
+        HBitmapIter hbi;
+        int64_t sector;
+        int64_t cluster;
+        int64_t last_cluster = -1;
+        bool polyrhythmic;
+
+        bdrv_dirty_iter_init(bs, job->sync_bitmap, &hbi);
+        /* Does the granularity happen to match our backup cluster size? */
+        polyrhythmic = (bdrv_dirty_bitmap_granularity(job->sync_bitmap) !=
+                        BACKUP_CLUSTER_SIZE);
Let it be false, i.e. granularity == cluster size.
+
+        /* Find the next dirty /sector/ and copy that /cluster/ */
+        while ((sector = hbitmap_iter_next(&hbi)) != -1) {
Then don't we skip the very first cluster here, if it is dirty?

I don't think so. The HBitmapIterator is still in its initial state here, and it can and will return 0 if the first sector is dirty.

The iotest submitted in v12 tests writes to the very first sector, and I just quickly verified that this function *does* return 0 on the first go-around.
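
To make that concrete, here is a toy stand-in for the iterator (not QEMU's HBitmap code; the ToyIter type, the sector count and the dirty pattern are made up purely for illustration). A freshly initialized iterator hands back the first dirty sector it finds, so sector 0 is not skipped:

/* Toy stand-in, not QEMU's HBitmapIter: the first call after init
 * returns sector 0 when sector 0 is dirty. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NB_SECTORS 64

typedef struct {
    const uint8_t *dirty;    /* one byte per sector, 1 == dirty */
    int64_t pos;             /* next sector to examine */
} ToyIter;

static void toy_iter_init(ToyIter *it, const uint8_t *dirty)
{
    it->dirty = dirty;
    it->pos = 0;             /* initial state: nothing consumed yet */
}

static int64_t toy_iter_next(ToyIter *it)
{
    while (it->pos < NB_SECTORS) {
        int64_t cur = it->pos++;
        if (it->dirty[cur]) {
            return cur;      /* can and does return 0 */
        }
    }
    return -1;               /* no dirty sectors left */
}

int main(void)
{
    uint8_t dirty[NB_SECTORS] = { [0] = 1, [17] = 1 };
    ToyIter it;
    int64_t sector;

    toy_iter_init(&it, dirty);
    while ((sector = toy_iter_next(&it)) != -1) {
        printf("dirty sector %" PRId64 "\n", sector);  /* prints 0, then 17 */
    }
    return 0;
}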

+            if (yield_and_check(job)) {
+                goto leave;
+            }
+            cluster = sector / BACKUP_SECTORS_PER_CLUSTER;
+
+            /* Play some catchup with the progress meter */
+            if (cluster != last_cluster + 1) {
+                job->common.offset += ((cluster - last_cluster - 1) *
+                                       BACKUP_CLUSTER_SIZE);
+            }
+
+            do {
+                ret = backup_do_cow(bs, cluster * BACKUP_SECTORS_PER_CLUSTER,
+                                    BACKUP_SECTORS_PER_CLUSTER, &error_is_read);
+                if ((ret < 0) &&
+                    backup_error_action(job, error_is_read, -ret) ==
+                    BLOCK_ERROR_ACTION_REPORT) {
+                    goto leave;
+                }
+            } while (ret < 0);
+
+            /* Advance (or rewind) our iterator if we need to. */
+            if (polyrhythmic) {
+                bdrv_set_dirty_iter(&hbi,
+                                    (cluster + 1) * BACKUP_SECTORS_PER_CLUSTER);
+            }
+
+            last_cluster = cluster;
+        }
+
+        /* Play some final catchup with the progress meter */
+        if (last_cluster + 1 < end) {
+            job->common.offset += ((end - last_cluster - 1) *
+                                   BACKUP_CLUSTER_SIZE);
+        }
+
      } else {
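
And for the granularity-mismatch ("polyrhythmic") case, here is a self-contained toy of the skip-ahead and progress-catchup bookkeeping in the hunk above (made-up constants and dirty pattern, not the real backup_run loop): after copying a cluster we jump the scan position past it, so several dirty sectors in one cluster cost only one copy, and the progress counter catches up for the clean clusters we stepped over.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define NB_SECTORS          32
#define SECTORS_PER_CLUSTER 8    /* assumed ratio, for illustration only */

int main(void)
{
    /* sectors 1 and 2 share cluster 0; 9 is in cluster 1; 30 is in cluster 3 */
    uint8_t dirty[NB_SECTORS] = { [1] = 1, [2] = 1, [9] = 1, [30] = 1 };
    int64_t nb_clusters = NB_SECTORS / SECTORS_PER_CLUSTER;
    int64_t progress = 0;        /* stands in for job->common.offset */
    int64_t last_cluster = -1;

    for (int64_t sector = 0; sector < NB_SECTORS; sector++) {
        if (!dirty[sector]) {
            continue;
        }
        int64_t cluster = sector / SECTORS_PER_CLUSTER;

        /* catchup for the clean clusters we skipped over */
        progress += cluster - last_cluster - 1;

        printf("copy cluster %" PRId64 " (hit at sector %" PRId64 ")\n",
               cluster, sector);
        progress++;
        last_cluster = cluster;

        /* jump past this cluster so sector 2 does not trigger a second
         * copy of cluster 0 (this is what bdrv_set_dirty_iter is for) */
        sector = (cluster + 1) * SECTORS_PER_CLUSTER - 1;
    }

    /* final catchup for the clean tail */
    progress += nb_clusters - last_cluster - 1;
    printf("progress: %" PRId64 " of %" PRId64 " clusters accounted for\n",
           progress, nb_clusters);
    return 0;
}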


--
—js


