
Re: [PATCH 0/4] support dirtyrate measurement with dirty bitmap

From: Hyman
Subject: Re: [PATCH 0/4] support dirtyrate measurement with dirty bitmap
Date: Wed, 14 Jul 2021 23:59:20 +0800
User-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0

On 2021/7/14 1:45, Peter Xu wrote:
> On Sun, Jul 11, 2021 at 11:27:13PM +0800, Hyman Huang wrote:
>>> IMHO we can directly do the calculation when synchronizing the dirty bits
>>> in the functions below:
>>>
>>> Maybe we can define a global statistic for that?
>> uhhh... Do you mean that we can reuse the DIRTY_MEMORY_MIGRATION dirty bits
>> to count the number of new dirty pages, and just define a global variable to
>> track the dirty pages added during the calculation period?

> I think I misled you... sorry :)
Never mind; the other version of the implementation is what you said. I'll post it later.

> cpu_physical_memory_sync_dirty_bitmap() should not really be in the list above,
> as it's fetching the bitmap in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION].
>
> If you look at the other two functions, they both apply dirty bits to the same
> bitmap (actually ram_list.dirty_memory[*] instead of migration-only).  It's
> used by e.g. the memory region log_sync() to deliver lower-level dirty bits
> upward; see e.g. kvm's log_sync[_global]() and kvm_slot_sync_dirty_pages().

> Using cpu_physical_memory_sync_dirty_bitmap() is not a good idea to me (I saw
> you used it in your latest version), as it could affect migration.  See its
> only caller now, ramblock_sync_dirty_bitmap(): when migration calls it, it
> will start to count fewer pages than it should in rs->migration_dirty_pages.

> So what I wanted to suggest is that we do some general counting in both
> cpu_physical_memory_set_dirty_range() and
> cpu_physical_memory_set_dirty_lebitmap().  Then, to sync for dirty-rate
> measuring, we use memory_global_dirty_log_sync().  That will sync all dirty
> bits, e.g. from the kernel, into ram_list.dirty_memory[*], and we do the
> accounting along the way.

> Would that work?
Yes, this works.
