From: David Hildenbrand
Subject: Re: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Thu, 1 Jul 2021 16:21:38 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.11.0
On 01.07.21 14:51, Peter Xu wrote:
> On Thu, Jul 01, 2021 at 04:42:38AM +0000, Wang, Wei W wrote:
>> On Thursday, July 1, 2021 4:08 AM, Peter Xu wrote:
>>> Taking the mutex every time for each dirty bit to clear is too slow,
>>> especially since we take/release it even if the dirty bit is already
>>> cleared. So far it's only used to sync special cases of
>>> qemu_guest_free_page_hint() against the migration thread, nothing
>>> really serious yet. Let's move the lock up a level.
>>>
>>> There are two callers of migration_bitmap_clear_dirty(). For
>>> migration, move it into ram_save_iterate(). With the help of the
>>> MAX_WAIT logic, we'll only run ram_save_iterate() for no more than
>>> ~50ms at a time, so take the lock once there at the entry. It also
>>> means any call site of qemu_guest_free_page_hint() can be delayed;
>>> but that should be very rare, only during migration, and I don't see
>>> a problem with it.
>>>
>>> For COLO, move it up to colo_flush_ram_cache(). I think COLO forgot
>>> to take that lock even when calling ramblock_sync_dirty_bitmap(),
>>> whereas migration_bitmap_sync() takes it correctly. So let the mutex
>>> cover both the ramblock_sync_dirty_bitmap() and
>>> migration_bitmap_clear_dirty() calls.
>>>
>>> It would even be possible to drop the lock and use atomic operations
>>> on rb->bmap and the variable migration_dirty_pages. I didn't do that
>>> just to stay on the safe side; it's also not predictable whether the
>>> frequent atomic ops would bring overhead of their own, e.g. on huge
>>> VMs where this happens very often. When that really becomes a
>>> problem, we can keep a local counter and flush it periodically with
>>> atomic ops. Keep it simple for now.
>>
>> If free page opt is enabled, a 50ms waiting time might be too long for
>> handling just one hint (via qemu_guest_free_page_hint)? How about
>> making the lock conditional? e.g.
>>
>> #define QEMU_LOCK_GUARD_COND(lock, cond) { if (cond) QEMU_LOCK_GUARD(lock); }
>>
>> Then in migration_bitmap_clear_dirty:
>>
>> QEMU_LOCK_GUARD_COND(&rs->bitmap_mutex, rs->fpo_enabled);
>
> Yeah, that's indeed the kind of comment I'd like to get from either you
> or David, which is why I added the cc list. :) I was curious how it
> would affect the guest if the free page hint helper can get stuck for a
> while. Per my understanding it's fully async, as the blocked thread
> here runs asynchronously to the guest, since both virtio-balloon and
> virtio-mem are fully async. If so, would it really affect the guest a
> lot? Is it still tolerable if it only happens during migration?
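(A side note on the suggestion above: QEMU_LOCK_GUARD is scope-based, so wrapping it in its own braces would drop the lock again right at the closing brace. Below is a minimal, self-contained sketch of the intended conditional locking, using plain pthreads and invented names rather than QEMU's actual definitions.)

    #include <pthread.h>
    #include <stdbool.h>

    /* All names here are illustrative stand-ins, not QEMU's real ones. */
    static pthread_mutex_t bitmap_mutex = PTHREAD_MUTEX_INITIALIZER;
    static bool fpo_enabled;        /* free page optimization active? */

    /* Take the mutex only when free page hinting can actually race
     * with us on the dirty bitmap; skip the overhead otherwise. */
    static void bitmap_clear_dirty_sketch(unsigned long *bmap, long page)
    {
        const unsigned long bits = 8 * sizeof(unsigned long);

        if (fpo_enabled) {
            pthread_mutex_lock(&bitmap_mutex);
        }
        bmap[page / bits] &= ~(1UL << (page % bits));
        if (fpo_enabled) {
            pthread_mutex_unlock(&bitmap_mutex);
        }
    }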
For virtio-mem, we call qemu_guest_free_page_hint() synchronously from the migration thread, directly via the precopy notifier. I recently sent patches that stop using qemu_guest_free_page_hint() from virtio-mem code. Until then, virtio-mem shouldn't care too much about the change here, I guess, as it doesn't interact with guest requests.
https://lkml.kernel.org/r/20210616162940.28630-1-david@redhat.com

For virtio-balloon, it's called via the (asynchronous) iothread.
> Taking that mutex for each dirty bit is still overkill to me,
> regardless of whether it's "conditional" or not. If I were the cloud
> admin, I would prefer that migration finish earlier, imho, rather than
> freeing some more pages on the host (after migration all pages will be
> gone anyway!). If it still blocks the guest in some unhealthy way, I'd
> still prefer to take the lock here, but maybe make it shorter than
> 50ms.
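(For contrast, here is the coarse-grained variant described above, reduced to a self-contained sketch; pthreads and an invented struct stand in for QEMU's RAMState, and the real patch takes the lock in ram_save_iterate(). The point is one lock acquisition per bounded iteration instead of one per dirty bit.)

    #include <pthread.h>

    /* Invented stand-in for QEMU's RAMState; field names are assumptions. */
    typedef struct {
        pthread_mutex_t bitmap_mutex;
        unsigned long *bmap;          /* dirty bitmap                    */
        unsigned long dirty_pages;    /* counter the mutex also protects */
    } RAMStateSketch;

    /* One lock/unlock per iteration; the iteration itself is bounded
     * (roughly 50ms via the MAX_WAIT logic discussed above), which is
     * also the worst-case delay a free page hint may now see. */
    static void save_iteration(RAMStateSketch *rs)
    {
        pthread_mutex_lock(&rs->bitmap_mutex);
        /* ... scan rs->bmap, clear bits, decrement rs->dirty_pages,
         *     and send the corresponding pages ... */
        pthread_mutex_unlock(&rs->bitmap_mutex);
    }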
Spoiler alert: the introduction of clean bitmaps already partially broke free page hinting, because clearing happens deferred -- and might never happen at all if we don't migrate *any* page within a clean bitmap chunk, so those pages actually remain dirty. "Broke" here means that pages still get migrated even though they were reported as free by the guest. We'd actually not want to use clean bmaps with free page hinting. Long story short: free page hinting is a very fragile beast already, and some of the hints are basically ignored and pure overhead.
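(A toy model of that interaction, with every name invented for illustration: the guest's hint clears migration's copy of the dirty bit, but the deferred kernel-side clear never runs for a chunk that is never migrated, so the next bitmap sync folds the stale bit straight back in.)

    #include <stdio.h>
    #include <stdbool.h>

    int main(void)
    {
        bool kvm_dirty = true;       /* page dirty in the kernel's log     */
        bool mig_dirty = true;       /* page dirty in migration's bitmap   */
        bool clear_deferred = true;  /* chunk untouched -> clear postponed */

        /* Guest reports the page as free: migration's bit is cleared. */
        mig_dirty = false;

        /* No page in this chunk gets migrated, so the deferred clear of
         * the kernel log never happens before the next sync. */
        if (!clear_deferred) {
            kvm_dirty = false;
        }

        /* Next bitmap sync: the kernel log is folded back into the
         * migration bitmap, resurrecting the "dirty" state. */
        mig_dirty |= kvm_dirty;

        printf("page migrated anyway: %s\n",
               mig_dirty ? "yes (hint ignored)" : "no");
        return 0;
    }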
--
Thanks,

David / dhildenb