qemu-devel

[Qemu-devel] [RFC PATCH v2] migration: calculate remaining pages accurately during the bulk stage


From: Quan Xu
Subject: [Qemu-devel] [RFC PATCH v2] migration: calculate remaining pages accurately during the bulk stage
Date: Wed, 5 Sep 2018 22:17:01 +0800
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:60.0) Gecko/20100101 Thunderbird/60.0

From 7de4cc7c944bfccde0ef10992a7ec882fdcf0508 Mon Sep 17 00:00:00 2001
From: Quan Xu <address@hidden>
Date: Wed, 5 Sep 2018 22:06:58 +0800
Subject: [RFC PATCH v2] migration: calculate remaining pages accurately during the bulk stage

Since the bulk stage assumes (in migration_bitmap_find_dirty) that every
page is dirty, initialize a remaining-bytes counter to the total RAM size
at the beginning of the iteration and decrease it by TARGET_PAGE_SIZE for
each processed page.

Signed-off-by: Quan Xu <address@hidden>
---
 migration/ram.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/migration/ram.c b/migration/ram.c
index 79c8942..1a11436 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -290,6 +290,8 @@ struct RAMState {
     uint32_t last_version;
     /* We are in the first round */
     bool ram_bulk_stage;
+    /* Remaining bytes in the first round */
+    uint64_t ram_bulk_bytes;
     /* How many times we have dirty too many pages */
     int dirty_rate_high_cnt;
     /* these variables are used for bitmap sync */
@@ -1540,6 +1542,7 @@ unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,

     if (rs->ram_bulk_stage && start > 0) {
         next = start + 1;
+        rs->ram_bulk_bytes -= TARGET_PAGE_SIZE;
     } else {
         next = find_next_bit(bitmap, size, start);
     }
@@ -2001,6 +2004,7 @@ static bool find_dirty_block(RAMState *rs, PageSearchStatus *pss, bool *again)
             /* Flag that we've looped */
             pss->complete_round = true;
             rs->ram_bulk_stage = false;
+            rs->ram_bulk_bytes = 0;
             if (migrate_use_xbzrle()) {
                 /* If xbzrle is on, stop using the data compression at this
                  * point. In theory, xbzrle can do better than compression.
                  */
@@ -2513,6 +2517,7 @@ static void ram_state_reset(RAMState *rs)
     rs->last_page = 0;
     rs->last_version = ram_list.version;
     rs->ram_bulk_stage = true;
+    rs->ram_bulk_bytes = ram_bytes_total();
 }

 #define MAX_WAIT 50 /* ms, half buffered_file limit */
@@ -3308,7 +3313,7 @@ static void ram_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
         /* We can do postcopy, and all the data is postcopiable */
         *res_compatible += remaining_size;
     } else {
-        *res_precopy_only += remaining_size;
+        *res_precopy_only += remaining_size + rs->ram_bulk_bytes;
     }
 }

--
1.8.3.1
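
For context, here is a minimal standalone sketch of the accounting this
patch introduces. This is not QEMU code: the 4 KiB page size, the 4 GiB
RAM size, the iteration count, and all variable names are illustrative
assumptions, and dirty_pages is fixed at zero purely to make the contrast
between the two estimates visible.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096ULL                      /* assumed target page size */

int main(void)
{
    uint64_t total_bytes = 4ULL << 30;         /* assume 4 GiB of guest RAM */
    uint64_t ram_bulk_bytes = total_bytes;     /* as set in ram_state_reset() */
    uint64_t dirty_pages = 0;                  /* bitmap-accounted dirty pages */

    /* Model the bulk stage walking the first million pages: each processed
     * page decrements the counter, mirroring the hunk added to
     * migration_bitmap_find_dirty(). */
    for (uint64_t page = 0; page < 1000000; page++) {
        ram_bulk_bytes -= PAGE_SIZE;
    }

    /* Before the patch, ram_save_pending() reports only the bitmap-derived
     * size; with the patch it also counts the bytes the first round has not
     * walked yet. */
    uint64_t remaining_size = dirty_pages * PAGE_SIZE;
    printf("estimate without patch: %" PRIu64 " bytes\n", remaining_size);
    printf("estimate with patch:    %" PRIu64 " bytes\n",
           remaining_size + ram_bulk_bytes);
    return 0;
}

Compiled and run, the first estimate collapses toward zero during the bulk
stage while the second still reflects the RAM not yet transferred, which is
the inaccuracy the patch addresses.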