
From: Peter Xu
Subject: Re: [PATCH] migration: Move bitmap_mutex out of migration_bitmap_clear_dirty()
Date: Wed, 7 Jul 2021 12:44:05 -0400

On Wed, Jul 07, 2021 at 08:33:21AM +0000, Wang, Wei W wrote:
> On Wednesday, July 7, 2021 2:00 AM, Peter Xu wrote:
> > On Fri, Jul 02, 2021 at 02:29:41AM +0000, Wang, Wei W wrote:
> > > With that, if free page opt is off, the mutex is skipped, isn't it?
> > 
> > Yes, but when free page is on, it'll check once per page.  As I mentioned I 
> > still
> > don't think it's the right thing to do.
> With free page opt on, if the migration thread waits for lock acquire on a 
> page, it actually means that it is trying to skip the transfer of a page.
> For example, waiting for the lock takes 100ns, then the skip of sending a 
> page saves back 1000ns, then overall we saved 900ns per page (i.e. pay less 
> and earn more).

The overhead we measured is purely for taking the lock, without any sleeping.  The
case you mentioned happens very rarely, while the CPU cycles to take the lock
(even if it's a cmpxchg) are spent constantly, for every guest page.

> > 
> > We encountered this problem when migrating a 3tb vm and the mutex spins and
> > eats tons of cpu resources.  It shouldn't happen with/without balloon, imho.
> I think we should compare the overall migration time.

In reality, we've already applied this patch in the 3TB migration test, and it
allows us to start migrating the 3TB VM with some light workload, while we
couldn't do so without the patch.  I don't know whether the balloon is enabled
or not, but.. it means that if virtio-balloon is enabled we can't migrate
either, even with a conditional lock, because the guest is using 2TB+ of memory
so there aren't many free pages.

> > 
> > Not to mention the hard migration issues are mostly with non-idle guest, in 
> > that
> > case having the balloon in the guest will be disastrous from this pov since 
> > it'll start
> > to take mutex for each page, while balloon would hardly report anything 
> > valid
> > since most guest pages are being used.
> If no pages are reported, migration thread wouldn't wait on the lock then.

Yes, I think this is the place where I didn't make myself clear.  It's not about
sleeping; it's about the cmpxchg already being expensive when the VM is huge.

> To conclude: to decide whether the per page lock hurts the performance 
> considering that the lock in some sense actually prevents the migration 
> thread from sending free pages which it shouldn't, we need to compare the 
> overall migration time.
> (previous data could be found 
> here:https://patchwork.kernel.org/project/kvm/cover/1535333539-32420-1-git-send-email-wei.w.wang@intel.com/,
>  I think the situation should be the same for either 8GB or 3TB guest, in 
> terms of the overall migration time comparison) 

We can't compare migration times if the migration can't even converge, can
we? :) The mutex is too expensive there, so this patch already helps it
converge.

Again, I understand you're worried the patch could make the balloon less
efficient for some use cases.  I think we can take the lock for less than 50ms
at a time, but as I've said multiple times, I still don't think it's good to
take it per page; I still don't believe we need that granularity.  Otherwise,
please justify why per-page locking is necessary.


Peter Xu
