From: John Snow
Subject: [Qemu-block] [PATCH 2/3] block/backup: avoid copying less than full target clusters
Date: Fri, 12 Feb 2016 18:06:31 -0500

During incremental backups, if the target has a cluster size that is
larger than the backup cluster size and we are backing up to a target
that cannot (for whichever reason) pull clusters up from a backing image,
we may inadvertently create unusable incremental backup images.

For example:

If the bitmap tracks changes at a 64KB granularity and we transmit 64KB
of data at a time but the target uses a 128KB cluster size, it is
possible that only half of a target cluster will be recognized as dirty
by the backup block job. When the cluster is allocated on the target
image but only half populated with data, we lose the ability to
distinguish between zero padding and uninitialized data.

This does not happen if the target image has a backing file that points
to the last known good backup.

Even if we have a backing file, though, it's likely faster to buffer the
redundant data ourselves from the live image than to fetch it from the
backing file, so let's always round up to the target granularity.

Signed-off-by: John Snow <address@hidden>
---
 block/backup.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index fcf0043..62faf81 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -568,9 +568,16 @@ void backup_start(BlockDriverState *bs, BlockDriverState 
     job->on_target_error = on_target_error;
     job->target = target;
     job->sync_mode = sync_mode;
-    job->sync_bitmap = sync_mode == MIRROR_SYNC_MODE_INCREMENTAL ?
-                       sync_bitmap : NULL;
-    job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
+    if (sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+        BlockDriverInfo bdi;
+        bdrv_get_info(job->target, &bdi);
+        job->sync_bitmap = sync_bitmap;
+        job->cluster_size = MAX(BACKUP_CLUSTER_SIZE_DEFAULT,
+                                bdi.cluster_size);
+    } else {
+        job->cluster_size = BACKUP_CLUSTER_SIZE_DEFAULT;
+    }
     job->sectors_per_cluster = job->cluster_size / BDRV_SECTOR_SIZE;
     job->common.len = len;
     job->common.co = qemu_coroutine_create(backup_run);
