From: Igor Mammedov
Subject: Re: [Qemu-ppc] [Qemu-devel] [PATCH v3 3/3] pc-dimm: factor out address space logic into MemoryDevice code
Date: Wed, 25 Apr 2018 15:23:56 +0200

On Wed, 25 Apr 2018 01:45:12 -0400 (EDT)
Pankaj Gupta <address@hidden> wrote:

> >   
> > > >     
> > > >> +    /* we will need a new memory slot for kvm and vhost */
> > > >> +    if (kvm_enabled() && !kvm_has_free_slot(machine)) {
> > > >> +        error_setg(errp, "hypervisor has no free memory slots left");
> > > >> +        return;
> > > >> +    }
> > > >> +    if (!vhost_has_free_slot()) {
> > > >> +        error_setg(errp, "a used vhost backend has no free memory slots left");
> > > >> +        return;
> > > >> +    }  
> > > > move these checks to pre_plug time
> > > >     
> > > >> +
> > > >> +    memory_region_add_subregion(&hpms->mr, addr - hpms->base, mr);  
> > > > missing vmstate registration?  
> > > 
> > > Missed this one: To be called by the caller. Important because e.g. for
> > > virtio-pmem we don't want this (I assume :) ).  
> > if pmem isn't on shared storage, then we'd probably want to migrate
> > it as well, otherwise the target would experience data loss.
> > Anyway, I'd just treat it as normal RAM in the migration case  
> 
> The main difference between RAM and pmem is that pmem acts like a combination of RAM and disk.
> That said, in the normal use case its size would be in the 100 GB to few TB range.
> I am not sure we really want to migrate it for the non-shared storage use case.
with non-shared storage you'd have to migrate it to the target host, but
with shared storage it might be possible to flush it and use it directly
from the target host. That probably won't work right out of the box and would
need some sort of synchronization between the src/dst hosts.

The same applies to nv/pc-dimm as well, as the backend file could easily be
on pmem storage too.

Maybe for now we should migrate everything so it would work in the case of a
non-shared NVDIMM on the host, and then later add a migration-less capability
to all of them.

> One reason why nvdimm added vmstate info could be: there would still be
> transient writes in memory with fake DAX, and there is no way (till now) to
> flush the guest writes. But with virtio-pmem we can flush such writes before
> migration, and at the destination host with a shared disk we will
> automatically have updated data.
nvdimm has the concept of a flush hint address (maybe not implemented in QEMU
yet), but it can flush. The only reason I'm buying into the virtio-pmem idea
is that it would allow async flush queues, which would reduce the number
of vmexits.

> 
> 
> Thanks,
> Pankaj  