[Qemu-devel] [PATCH 47/47] mirror: support arbitrarily-sized iterations
From: Paolo Bonzini
Subject: [Qemu-devel] [PATCH 47/47] mirror: support arbitrarily-sized iterations
Date: Tue, 24 Jul 2012 13:04:25 +0200
Yet another optimization is to extend the mirroring iteration to include more
adjacent dirty blocks. This limits the number of I/O operations and makes
mirroring efficient even with a small granularity. Most of the infrastructure
is already in place; we only need to put a loop around the computation of
the origin and sector count of each iteration.
Signed-off-by: Paolo Bonzini <address@hidden>
---
block/mirror.c | 100 ++++++++++++++++++++++++++++++++++++++------------------
trace-events | 1 +
2 files changed, 69 insertions(+), 32 deletions(-)
diff --git a/block/mirror.c b/block/mirror.c
index 93e718f..87d97eb 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -127,7 +127,7 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
{
BlockDriverState *source = s->common.bs;
int nb_sectors, nb_sectors_chunk, nb_chunks;
- int64_t end, sector_num, cluster_num, next_sector, hbitmap_next_sector;
+ int64_t end, sector_num, next_cluster, next_sector, hbitmap_next_sector;
MirrorOp *op;
s->sector_num = hbitmap_iter_next(&s->hbi);
@@ -139,47 +139,83 @@ static void coroutine_fn mirror_iteration(MirrorBlockJob *s)
     }
hbitmap_next_sector = s->sector_num;
+ sector_num = s->sector_num;
+ nb_sectors_chunk = s->granularity >> BDRV_SECTOR_BITS;
+ end = s->common.len >> BDRV_SECTOR_BITS;
- /* If we have no backing file yet in the destination, and the cluster size
- * is very large, we need to do COW ourselves. The first time a cluster is
- * copied, copy it entirely.
+ /* Extend the QEMUIOVector to include all adjacent blocks that will
+ * be copied in this operation.
+ *
+ * We have to do this if we have no backing file yet in the destination,
+ * and the cluster size is very large. Then we need to do COW ourselves.
+ * The first time a cluster is copied, copy it entirely. Note that,
+ * because both the granularity and the cluster size are powers of two,
+ * the number of sectors to copy cannot exceed one cluster.
*
- * Because both the granularity and the cluster size are powers of two, the
- * number of sectors to copy cannot exceed one cluster.
+ * We also want to extend the QEMUIOVector to include more adjacent
+ * dirty blocks if possible, to limit the number of I/O operations and
+ * run efficiently even with a small granularity.
*/
- sector_num = s->sector_num;
- nb_sectors_chunk = nb_sectors = s->granularity >> BDRV_SECTOR_BITS;
- cluster_num = sector_num / nb_sectors_chunk;
- if (s->cow_bitmap && !test_bit(cluster_num, s->cow_bitmap)) {
- trace_mirror_cow(s, sector_num);
- bdrv_round_to_clusters(s->target,
- sector_num, nb_sectors_chunk,
-                           &sector_num, &nb_sectors);
-
- /* The rounding may make us copy sectors before the
- * first dirty one.
- */
- cluster_num = sector_num / nb_sectors_chunk;
- }
+ nb_chunks = 0;
+ nb_sectors = 0;
+ next_sector = sector_num;
+ next_cluster = sector_num / nb_sectors_chunk;
/* Wait for I/O to this cluster (from a previous iteration) to be done. */
- while (test_bit(cluster_num, s->in_flight_bitmap)) {
+ while (test_bit(next_cluster, s->in_flight_bitmap)) {
trace_mirror_yield_in_flight(s, sector_num, s->in_flight);
qemu_coroutine_yield();
}
- end = s->common.len >> BDRV_SECTOR_BITS;
- nb_sectors = MIN(nb_sectors, end - sector_num);
- nb_chunks = (nb_sectors + nb_sectors_chunk - 1) / nb_sectors_chunk;
- while (s->buf_free_count < nb_chunks) {
- trace_mirror_yield_buf_busy(s, nb_chunks, s->in_flight);
- qemu_coroutine_yield();
- }
+ do {
+ int added_sectors, added_chunks;
- /* We have enough free space to copy these sectors. */
- if (s->cow_bitmap) {
- bitmap_set(s->cow_bitmap, cluster_num, nb_chunks);
- }
+ if (!bdrv_get_dirty(source, next_sector) ||
+ test_bit(next_cluster, s->in_flight_bitmap)) {
+ assert(nb_sectors > 0);
+ break;
+ }
+
+ added_sectors = nb_sectors_chunk;
+ if (s->cow_bitmap && !test_bit(next_cluster, s->cow_bitmap)) {
+ bdrv_round_to_clusters(s->target,
+ next_sector, added_sectors,
+ &next_sector, &added_sectors);
+
+ /* On the first iteration, the rounding may make us copy
+ * sectors before the first dirty one.
+ */
+ if (next_sector < sector_num) {
+ assert(nb_sectors == 0);
+ sector_num = next_sector;
+ next_cluster = next_sector / nb_sectors_chunk;
+ }
+ }
+
+ added_sectors = MIN(added_sectors, end - (sector_num + nb_sectors));
+        added_chunks = (added_sectors + nb_sectors_chunk - 1) / nb_sectors_chunk;
+
+ /* When doing COW, it may happen that there are not enough free
+ * buffers to copy a full cluster. Wait if that is the case.
+ */
+ while (nb_chunks == 0 && s->buf_free_count < added_chunks) {
+ trace_mirror_yield_buf_busy(s, nb_chunks, s->in_flight);
+ qemu_coroutine_yield();
+ }
+ if (s->buf_free_count < nb_chunks + added_chunks) {
+ trace_mirror_break_buf_busy(s, nb_chunks, s->in_flight);
+ break;
+ }
+
+ /* We have enough free space to copy these sectors. */
+ if (s->cow_bitmap) {
+ bitmap_set(s->cow_bitmap, next_cluster, added_chunks);
+ }
+ nb_sectors += added_sectors;
+ nb_chunks += added_chunks;
+ next_sector += added_sectors;
+ next_cluster += added_chunks;
+ } while (next_sector < end);
/* Allocate a MirrorOp that is used as an AIO callback. */
op = g_slice_new(MirrorOp);
diff --git a/trace-events b/trace-events
index 7ae11e9..cd387fa 100644
--- a/trace-events
+++ b/trace-events
@@ -87,6 +87,7 @@ mirror_iteration_done(void *s, int64_t sector_num, int nb_sectors) "s %p sector_
 mirror_yield(void *s, int64_t cnt, int buf_free_count, int in_flight) "s %p dirty count %"PRId64" free buffers %d in_flight %d"
 mirror_yield_in_flight(void *s, int64_t sector_num, int in_flight) "s %p sector_num %"PRId64" in_flight %d"
 mirror_yield_buf_busy(void *s, int nb_chunks, int in_flight) "s %p requested chunks %d in_flight %d"
+mirror_break_buf_busy(void *s, int nb_chunks, int in_flight) "s %p requested chunks %d in_flight %d"
# blockdev.c
qmp_block_job_cancel(void *job) "job %p"
--
1.7.10.4