From: Peter Xu
Subject: Re: [Qemu-devel] [PATCH 00/11] kvm/migration: support KVM_CLEAR_DIRTY_LOG
Date: Thu, 9 May 2019 10:33:19 +0800
User-agent: Mutt/1.10.1 (2018-07-13)

On Wed, May 08, 2019 at 01:55:07PM +0200, Paolo Bonzini wrote:
> On 08/05/19 06:39, Peter Xu wrote:
> >> The disadvantage of this is that you won't clear in the kernel those
> >> dirty bits that come from other sources (e.g. vhost or
> >> address_space_map). This can lead to double-copying of pages.
> >>
> >> Migration already makes a local copy in rb->bmap, and
> >> memory_region_snapshot_and_clear_dirty can also do the clear. Would it
> >> be possible to invoke the clear using rb->bmap instead of the KVMSlot's
> >> new bitmap?
> >
> > Actually that's what I did in the first version before I posted the
> > work, but I noticed that there seems to be a race condition in the
> > design.  The problem is that we have multiple copies of the same
> > dirty bitmap from KVM, and the race can happen between those
> > multiple users (the users' bitmaps can be merged versions containing
> > KVM and other sources like vhost, address_space_map, etc., but let's
> > keep it simple and leave those out for now).
>
> I see now. And in fact the same double-copying inefficiency happens
> already without this series, so you are improving the situation anyway.
>
> Have you done any kind of benchmarking already?
Not yet.  I posted the series for some initial review first, before
moving on to performance tests.

My plan for the test scenario would be:
- find a guest with relatively large memory (I would guess 64G or even
  more is needed to make the difference clearly visible)
- run a random memory-dirtying workload over most of that memory, with
  dirty rate X bytes/s.
- set the migration bandwidth to Y bytes/s (Y should be bigger than X,
  but not by much; e.g. X=800M and Y=1G to emulate a 10G NIC with a
  workload that can still converge with precopy only; see the
  back-of-the-envelope numbers just below this list) and start precopy
  migration.
- measure total migration time with CLEAR_LOG on and off.  With
  CLEAR_LOG we should expect that (1) the guest does not hang during
  log_sync, and (2) migration completes faster.
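
(As a rough sanity check on those numbers, assuming a uniformly random
dirty pattern and ignoring overheads: each precopy pass shrinks the
data still to be sent by roughly a factor of X/Y, so the total data
transferred converges to about MEM / (1 - X/Y).  For a 64G guest with
X=800M and Y=1G that is 64G / 0.2 = 320G, i.e. roughly 320 seconds of
migration time as the baseline against which to compare CLEAR_LOG on
and off.)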
Does the above test plan make sense?
If the QEMU/KVM changes look OK in general, I can at least try this on
some smaller guests (I can manage ~10G-mem guests on my own hosts, but
I can also try to find some bigger ones).
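
For context, the clear operation discussed above boils down to the
kernel's KVM_CLEAR_DIRTY_LOG ioctl.  Below is a minimal usage sketch
(struct layout per Documentation/virt/kvm/api.txt; the helper name,
vm_fd, and the range values are placeholders, not code from this
series):

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /*
     * Re-protect (clear the dirty state of) the pages whose bits are
     * set in 'bitmap', so that the next write to them is logged again.
     * Only valid once KVM_CAP_MANUAL_DIRTY_LOG_PROTECT{,2} has been
     * enabled on the VM via KVM_ENABLE_CAP.
     */
    static int clear_dirty_range(int vm_fd, uint32_t slot_id,
                                 uint64_t first_page, uint32_t num_pages,
                                 void *bitmap)
    {
        struct kvm_clear_dirty_log log = {
            .slot = slot_id,          /* address space id in top 16 bits */
            .first_page = first_page, /* 64-page aligned per the docs */
            .num_pages = num_pages,
            .dirty_bitmap = bitmap,   /* one bit per page in the range */
        };

        return ioctl(vm_fd, KVM_CLEAR_DIRTY_LOG, &log);
    }

Because the interface takes an explicit page range, the clear can be
issued in small pieces rather than for a whole slot at sync time,
which is what patch 11 relies on to split log_clear() into smaller
chunks.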
Thanks,
--
Peter Xu