

From: Lei Li
Subject: Re: [Qemu-devel] [PATCH 13/18] arch_init: adjust ram_save_setup() for migrate_is_localhost
Date: Fri, 23 Aug 2013 14:25:36 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/17.0 Thunderbird/17.0

On 08/21/2013 06:48 PM, Paolo Bonzini wrote:
> On 21/08/2013 09:18, Lei Li wrote:
>> Send all the ram blocks hooked by save_page, which will copy
>> ram page and MADV_DONTNEED the page just copied.
> You should implement this entirely in the hook.
>
> It will be a little less efficient because of the dirty bitmap overhead,
> but you should aim at having *zero* changes in arch_init.c and migration.c.
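
[For illustration only, a minimal sketch of what such a hook might look like, assuming the QEMUFileOps save_page signature of this period; local_save_page() and local_transport_put() are hypothetical names, not part of this series:]

    /* Hypothetical save_page hook for a localhost QEMUFile: hand the page
     * to the destination, then MADV_DONTNEED it so the source releases
     * each page as soon as it has been copied over. */
    static size_t local_save_page(QEMUFile *f, void *opaque,
                                  ram_addr_t block_offset, ram_addr_t offset,
                                  size_t size, int *bytes_sent)
    {
        /* Resolve the host address of the guest page; the exact lookup
         * helper may differ. */
        uint8_t *host = qemu_get_ram_ptr(block_offset + offset);

        local_transport_put(opaque, host, size);      /* placeholder transport */
        qemu_madvise(host, size, QEMU_MADV_DONTNEED); /* drop the source copy */

        *bytes_sent = size;
        return size;
    }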

Yes, that is exactly why I modified migration_thread() to use a new process for
localhost migration: it sends all the ram pages in the adjusted
qemu_savevm_state_begin stage and the device states in the
qemu_savevm_device_state stage, precisely to avoid the dirty bitmap overhead
you mention above.

Performance assurance is very important for this feature; our goal is 100ms of
downtime for a 1TB guest.
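
[Schematically, the localhost path described above amounts to something like the
following in migration_thread(); this is a sketch using the names from this
series (migrate_is_localhost(), qemu_savevm_device_state()), not the patch
itself:]

    if (migrate_is_localhost()) {
        /* Localhost fast path: push all RAM once in the begin stage,
         * then the device state; no iterative dirty-bitmap rounds. */
        qemu_savevm_state_begin(s->file, &params);
        qemu_savevm_device_state(s->file);
    } else {
        /* Normal live migration: begin, iterate while dirty pages
         * remain, then complete. */
    }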


> Paolo

>> Signed-off-by: Lei Li <address@hidden>
>> ---
>>   arch_init.c |   19 +++++++++++++------
>>   1 files changed, 13 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch_init.c b/arch_init.c
>> index 434a4ca..cbbb4db 100644
>> --- a/arch_init.c
>> +++ b/arch_init.c
>> @@ -474,7 +474,7 @@ static int ram_save_block(QEMUFile *f, bool last_stage)
>>              /* In doubt sent page as normal */
>>              bytes_sent = -1;
>>              ret = ram_control_save_page(f, block->offset,
>> -                               offset, TARGET_PAGE_SIZE, &bytes_sent);
>> +                                        offset, TARGET_PAGE_SIZE, &bytes_sent);
>>
>>              if (ret != RAM_SAVE_CONTROL_NOT_SUPP) {
>>                  if (ret != RAM_SAVE_CONTROL_DELAYED) {
>> @@ -613,11 +613,13 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>      RAMBlock *block;
>>      int64_t ram_pages = last_ram_offset() >> TARGET_PAGE_BITS;
>>
>> -    migration_bitmap = bitmap_new(ram_pages);
>> -    bitmap_set(migration_bitmap, 0, ram_pages);
>> -    migration_dirty_pages = ram_pages;
>> -    mig_throttle_on = false;
>> -    dirty_rate_high_cnt = 0;
>> +    if (!migrate_is_localhost()) {
>> +        migration_bitmap = bitmap_new(ram_pages);
>> +        bitmap_set(migration_bitmap, 0, ram_pages);
>> +        migration_dirty_pages = ram_pages;
>> +        mig_throttle_on = false;
>> +        dirty_rate_high_cnt = 0;
>> +    }
>>
>>      if (migrate_use_xbzrle()) {
>>          XBZRLE.cache = cache_init(migrate_xbzrle_cache_size() /
>> @@ -641,6 +643,11 @@ static int ram_save_setup(QEMUFile *f, void *opaque)
>>      migration_bitmap_sync();
>>      qemu_mutex_unlock_iothread();
>>
>> +    if (migrate_is_localhost()) {
>> +        ram_save_blocks(f);
>> +        return 0;
>> +    }
>> +
>>      qemu_put_be64(f, ram_bytes_total() | RAM_SAVE_FLAG_MEM_SIZE);
>>
>>      QTAILQ_FOREACH(block, &ram_list.blocks, next) {




--
Lei



