Re: [Qemu-devel] [PATCH v5 05/14] vl: handle "-device dimm"


From: Vasilis Liaskovitis
Subject: Re: [Qemu-devel] [PATCH v5 05/14] vl: handle "-device dimm"
Date: Mon, 15 Jul 2013 19:05:51 +0200
User-agent: Mutt/1.5.21 (2010-09-15)

Hi,

On Thu, Jun 27, 2013 at 08:55:25AM +0200, Paolo Bonzini wrote:
> On 27/06/2013 07:08, Wanlong Gao wrote:
> > Do we really need to specify the memory range? I suspect that we can
> > follow current design of normal memory in hot-plug memory.
> 
> I think we can do both.  I'm afraid that the configuration of the VM
> will not be perfectly reproducible without specifying the range, more so
> if you allow hotplug.
> 
> > Currently,
> > we just specify the size of normal memory in each node, and the range
> > in normal memory is node by node. Then I think we can just specify
> > the memory size of hot-plug in each node, then the hot-plug memory
> > range is also node by node, and the whole hot-plug memory block is
> > just located after the normal memory block. If so, the option can
> > come like:
> >     -numa node,nodeid=0,mem=2G,cpus=0-1,mem-hotplug=2G,mem-policy=membind,mem-hostnode=0-1,mem-hotplug-policy=interleave,mem-hotplug-hostnode=1
> >     -numa node,nodeid=1,mem=2G,cpus=2-3,mem-hotplug=2G,mem-policy=preferred,mem-hostnode=1,mem-hotplug-policy=membind,mem-hotplug-hostnode=0-1
> 
> I think specifying different policies and bindings for normal and
> hotplug memory is too much fine-grained.  If you really want that, then
> you would need something like
> 
>     -numa node,nodeid=0,cpus=0-1 \
>     -numa mem,nodeid=0,size=2G,policy=membind,hostnode=0-1 \
>     -numa mem,nodeid=0,size=2G,policy=interleave,hostnode=1,populated=no
> 
> Hmm... this actually doesn't look too bad, and it is much more
> future-proof.  Eduardo, what do you think about it?  Should Wanlong redo
> his patches to support this "-numa mem" syntax?  Parsing it should be
> easy using the QemuOpts visitor, too.

From what I understand, we are currently favoring this -numa option? (I saw it
mentioned in Gao's numa patchset series as well.)

There is still the question of "how many hotpluggable dimm devices does this
memory range describe?" With the dimm device this was clearly defined, but it
is not with this option. Do we choose a default granularity, e.g. 1 GB?
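
For example (purely illustrative, assuming a hypothetical default granularity
of 1 GB), something like

    -numa mem,nodeid=0,size=2G,populated=no

would then implicitly describe two 1 GB hotpluggable dimms, i.e. roughly the
equivalent of

    -device dimm,id=dimm0,size=1G,node=0 -device dimm,id=dimm1,size=1G,node=0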

Also, as you mentioned, without specifying the memory range, the VM
configuration may be ambiguous. Currently, the VM memory map depends on the
order of dimms defined on the command line. So:

"-device dimm,id=dimm0,size=1G,node=0 -device dimm,id=dimm1,size=2G,node=0"
and
"-device dimm,id=dimm1,size=2G,node=0 -device dimm,id=dimm1,size=1G,node=0"

assign different memory ranges to the dimms.
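
To make the ambiguity concrete (the 4G base address below is made up purely
for illustration, with dimms laid out back-to-back in command-line order):

    first order:  dimm0 -> [4G, 5G), dimm1 -> [5G, 7G)
    second order: dimm1 -> [4G, 6G), dimm0 -> [6G, 7G)

i.e. the same dimm ends up at a different guest-physical address depending
only on ordering.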

On the other hand, IIRC memory ranges were discussed with previous maintainers
but were rejected: the user/management library may not want to know, or may
simply not know, architectural details of the guest hardware. What happens if
the user specifies memory on top of the PCI hole? Do we bail out or adjust
their arguments? Adjusting ranges might open another can of worms.
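
As a made-up example (the start= property and the addresses below are
hypothetical, only to illustrate the problem), a request like

    -device dimm,id=dimm0,start=3G,size=2G,node=0

could land on top of the PCI hole below 4G on a PC machine, leaving QEMU to
either reject the device outright or silently relocate it above 4G.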

In any case, it would be good to get a final consensus on this.

thanks,

- Vasilis


