
Re: [PATCH 1/3] migration: Teach dirtyrate about qemu_target_page_size()


From: Richard Henderson
Subject: Re: [PATCH 1/3] migration: Teach dirtyrate about qemu_target_page_size()
Date: Thu, 11 May 2023 12:07:58 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.10.0

On 5/11/23 10:22, Juan Quintela wrote:
Signed-off-by: Juan Quintela <quintela@redhat.com>
---
  migration/dirtyrate.c | 11 ++++++-----
  1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/migration/dirtyrate.c b/migration/dirtyrate.c
index 180ba38c7a..9aa092738c 100644
--- a/migration/dirtyrate.c
+++ b/migration/dirtyrate.c
@@ -17,6 +17,7 @@
  #include "cpu.h"
  #include "exec/ramblock.h"
  #include "exec/ram_addr.h"
+#include "exec/target_page.h"
  #include "qemu/rcu_queue.h"
  #include "qemu/main-loop.h"
  #include "qapi/qapi-commands-migration.h"
@@ -78,7 +79,7 @@ static int64_t do_calculate_dirtyrate(DirtyPageRecord dirty_pages,
      uint64_t increased_dirty_pages =
          dirty_pages.end_pages - dirty_pages.start_pages;
-    memory_size_MB = (increased_dirty_pages * TARGET_PAGE_SIZE) >> 20;
+    memory_size_MB = (increased_dirty_pages * qemu_target_page_size()) >> 20;

See the recent cleanups for dirtylimit_dirty_ring_full_time, which fold the multiply+shift into a single shift whose count is computed by subtraction.
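
For clarity, a small standalone illustration of that identity (toy C, not QEMU code; names here are made up): since the target page size is a power of two, (pages * page_size) >> 20 equals pages >> (20 - page_bits), so the multiplication disappears and the shift count is just a subtraction.

#include <assert.h>
#include <inttypes.h>
#include <stdio.h>

/* Toy demonstration only -- not QEMU code.  With page_size == 1 << page_bits
 * and page_bits <= 20, multiply+shift and the folded single shift agree,
 * and the folded form avoids the intermediate multiplication entirely. */
static uint64_t pages_to_mib_mul_shift(uint64_t pages, unsigned page_bits)
{
    return (pages * (UINT64_C(1) << page_bits)) >> 20;
}

static uint64_t pages_to_mib_folded(uint64_t pages, unsigned page_bits)
{
    return pages >> (20 - page_bits);
}

int main(void)
{
    const unsigned page_bits = 12;       /* e.g. a 4 KiB target page */
    const uint64_t dirty_pages = 300000; /* arbitrary sample count   */

    assert(pages_to_mib_mul_shift(dirty_pages, page_bits) ==
           pages_to_mib_folded(dirty_pages, page_bits));
    printf("%" PRIu64 " dirty pages ~= %" PRIu64 " MiB\n",
           dirty_pages, pages_to_mib_folded(dirty_pages, page_bits));
    return 0;
}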

      return memory_size_MB * 1000 / calc_time_ms;
  }
@@ -291,8 +292,8 @@ static void update_dirtyrate_stat(struct RamblockDirtyInfo *info)
      DirtyStat.page_sampling.total_dirty_samples += info->sample_dirty_count;
      DirtyStat.page_sampling.total_sample_count += info->sample_pages_count;
      /* size of total pages in MB */
-    DirtyStat.page_sampling.total_block_mem_MB += (info->ramblock_pages *
-                                                   TARGET_PAGE_SIZE) >> 20;
+    DirtyStat.page_sampling.total_block_mem_MB +=
+        (info->ramblock_pages * qemu_target_page_size()) >> 20;

And a third copy?
Can we abstract this somewhere?
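
Something along the lines of the sketch below, perhaps (the helper name, placement, and signature are only illustrative, not an existing QEMU API), so all the open-coded copies collapse into one call:

#include "qemu/osdep.h"
#include "exec/target_page.h"    /* qemu_target_page_bits() */

/* Hypothetical helper -- name and placement are illustrative only. */
static inline uint64_t dirty_pages_to_MiB(uint64_t pages)
{
    /* The target page size is a power of two, so fold multiply+shift
     * into a single shift. */
    return pages >> (20 - qemu_target_page_bits());
}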


r~
