
From: Peter Xu
Subject: Re: [PATCH v2 0/6] migration/ram: Optimize for virtio-mem via RamDiscardManager
Date: Thu, 29 Jul 2021 16:28:52 -0400

On Thu, Jul 29, 2021 at 10:06:16PM +0200, David Hildenbrand wrote:
> On 29.07.21 22:00, Peter Xu wrote:
> > On Thu, Jul 29, 2021 at 09:39:24PM +0200, David Hildenbrand wrote:
> > > 
> > > > > In the meantime I adjusted the code, but it does the clearing under
> > > > > the iothread lock, which is probably not what we want ... I'll have
> > > > > a look.
> > > > 
> > > > Thanks; if it takes more changes than expected we can still start
> > > > simple, IMHO, by taking the BQL and yielding it in a timely manner.
> > > > 
> > > > In the meantime, I found two things in ram_init_bitmaps() that I'm
> > > > not sure we need or not:
> > > > 
> > > >     1. Do we need WITH_RCU_READ_LOCK_GUARD() if we hold both the BQL
> > > >        and the ramlist lock?  (small question)
> > > 
> > > Good question; I'm not sure if we need it.
> > > 
> > > > 
> > > >     2. Do we need migration_bitmap_sync_precopy() even if the dirty
> > > >        bmap is all 1's?  (bigger question)
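
For context, ram_init_bitmaps() at the time looked roughly like the sketch
below (based on migration/ram.c around QEMU 6.0, trimmed; details may differ
in other versions).  It shows the nesting both questions are about: the RCU
read guard is entered while the BQL and the ramlist lock are already held,
and migration_bitmap_sync_precopy() runs inside all three:

    static void ram_init_bitmaps(RAMState *rs)
    {
        /* For memory_global_dirty_log_start below */
        qemu_mutex_lock_iothread();            /* i.e. the BQL */
        qemu_mutex_lock_ramlist();

        WITH_RCU_READ_LOCK_GUARD() {
            ram_list_init_bitmaps();           /* bmap starts out all 1's */
            memory_global_dirty_log_start();
            migration_bitmap_sync_precopy(rs);
        }
        qemu_mutex_unlock_ramlist();
        qemu_mutex_unlock_iothread();
    }
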
> > > 
> > > IIRC, the bitmap sync will fetch the proper dirty bitmap from KVM and set
> > > the proper bits in the clear_bitmap. So once we call
> > > migration_clear_memory_region_dirty_bitmap_range() etc. later we will
> > > actually clear dirty bits.
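
(David's point maps to this branch of cpu_physical_memory_sync_dirty_bitmap(),
a trimmed sketch from include/exec/ram_addr.h of that era: with a clear_bmap
present, sync only records which ranges will need clearing, and the real
clear is deferred until right before the pages are sent.)

    /* Inside cpu_physical_memory_sync_dirty_bitmap(), per synced range: */
    if (rb->clear_bmap) {
        /*
         * Postpone the dirty bitmap clear to the point right before we
         * really send the pages; the clear is also split into smaller
         * chunks there.
         */
        clear_bmap_set(rb, start >> TARGET_PAGE_BITS,
                       length >> TARGET_PAGE_BITS);
    } else {
        /* Slow path - clear right away, in one huge chunk */
        memory_region_clear_dirty_bitmap(rb->mr, start, length);
    }
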
> > 
> > Good point; but then I'm wondering whether we should just init clear_bmap
> > to all 1's at init too, just like the dirty bmap. :)
> 
> Yes, but ... I'm not sure if we have to get the dirty bits into
> KVMSlot->dirty_bmap as well in order to clear them.

Yes, so far it's closely bound to KVM's dirty_bmap, so it indeed sounds
needed (see kvm_slot_init_dirty_bitmap).
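
(A rough sketch of kvm_slot_init_dirty_bitmap() from accel/kvm/kvm-all.c of
roughly that time, with the long "bad kernel interface alert" comment
trimmed: it lazily allocates the per-slot buffer that both KVM_GET_DIRTY_LOG
and KVM_CLEAR_DIRTY_LOG operate on, which is why the clear path is bound to
it.)

    static void kvm_slot_init_dirty_bitmap(KVMSlot *mem)
    {
        if (!(mem->flags & KVM_MEM_LOG_DIRTY_PAGES) || mem->dirty_bmap) {
            return;
        }

        /* One bit per page, aligned up to 64-bit words so that 32-bit
         * userspace agrees with a 64-bit kernel on the buffer size. */
        hwaddr bitmap_size = ALIGN(mem->memory_size / qemu_real_host_page_size,
                                   /*HOST_LONG_BITS*/ 64) / 8;
        mem->dirty_bmap = g_malloc0(bitmap_size);
    }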

> 
> It could work with "manual_dirty_log_protect". For !manual_dirty_log_protect
> we might have to keep it that way ... which means we might have to expose
> some ugly details up to migration/ram.c.
> Might require some thought :)

We should make sure the clear_log() hooks always work, so the memory API
should be able to call the memory region clear-log API without knowing
whether it's enabled underneath, in either KVM or any other future
clear_log() hook.  KVM currently should be fine, as kvm_physical_log_clear()
checks manual protect on entry and returns directly otherwise.  Thanks,
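
(For reference, the guard mentioned above, sketched from accel/kvm/kvm-all.c
of that era; the real function goes on to issue per-slot KVM_CLEAR_DIRTY_LOG
ioctls for the section being cleared.)

    static int kvm_physical_log_clear(KVMMemoryListener *kml,
                                      MemoryRegionSection *section)
    {
        KVMState *s = kvm_state;
        int ret = 0;

        if (!s->manual_dirty_log_protect) {
            /* No need to do explicit clear */
            return ret;
        }

        /* ... walk the slots overlapping the section and clear them ... */
        return ret;
    }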

-- 
Peter Xu



