
Re: [Qemu-devel] [PATCH v5 05/14] vl: handle "-device dimm"


From: Paolo Bonzini
Subject: Re: [Qemu-devel] [PATCH v5 05/14] vl: handle "-device dimm"
Date: Thu, 27 Jun 2013 08:55:25 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130514 Thunderbird/17.0.6

On 27/06/2013 07:08, Wanlong Gao wrote:
> Do we really need to specify the memory range? I suspect that we can
> follow the current design of normal memory for hot-plug memory.

I think we can do both.  I'm afraid that the configuration of the VM
will not be perfectly reproducible without specifying the range, all the
more so if you allow hotplug: after a mix of plug and unplug operations,
the guest-physical layout depends on the order in which they happened.

> Currently, we just specify the size of normal memory in each node, and
> the normal memory ranges are laid out node by node. I think we can
> likewise specify the hot-plug memory size for each node, so that the
> hot-plug memory ranges are also laid out node by node, with the whole
> hot-plug memory block located right after the normal memory block. If
> so, the option could look like:
>     -numa node,nodeid=0,mem=2G,cpus=0-1,mem-hotplug=2G,mem-policy=membind,mem-hostnode=0-1,mem-hotplug-policy=interleave,mem-hotplug-hostnode=1
>     -numa node,nodeid=1,mem=2G,cpus=2-3,mem-hotplug=2G,mem-policy=preferred,mem-hostnode=1,mem-hotplug-policy=membind,mem-hotplug-hostnode=0-1

I think specifying different policies and bindings for normal and
hotplug memory is too fine-grained.  If you really want that, then
you would need something like

    -numa node,nodeid=0,cpus=0-1 \
    -numa mem,nodeid=0,size=2G,policy=membind,hostnode=0-1 \
    -numa mem,nodeid=0,size=2G,policy=interleave,hostnode=1,populated=no

Hmm... this actually doesn't look too bad, and it is much more
future-proof.  Eduardo, what do you think about it?  Should Wanlong redo
his patches to support this "-numa mem" syntax?  Parsing it should be
easy using the QemuOpts visitor, too.
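
[Editor's note: for illustration, a minimal sketch of what such a
handler could look like with the QemuOpts visitor.  The type name
NumaMemOptions and its generated visit/free helpers are hypothetical
stand-ins for a QAPI schema entry describing the proposed "-numa mem"
fields; opts_visitor_new(), opts_get_visitor() and
opts_visitor_cleanup() are the existing qapi/opts-visitor.h API, whose
exact signatures have varied across QEMU versions.]

    /* Sketch only: "NumaMemOptions" stands for a hypothetical
     * QAPI-generated struct with nodeid/size/policy/hostnode/populated
     * fields; visit_type_NumaMemOptions() and qapi_free_NumaMemOptions()
     * would be generated alongside it. */
    #include "qemu/option.h"
    #include "qapi/opts-visitor.h"
    #include "qapi-visit.h"

    static int numa_mem_parse(QemuOpts *opts, void *opaque)
    {
        Error *err = NULL;
        NumaMemOptions *object = NULL;
        OptsVisitor *ov = opts_visitor_new(opts);

        /* A single visit fills the whole struct from the QemuOpts;
         * the opts visitor already understands integer ranges such
         * as hostnode=0-1. */
        visit_type_NumaMemOptions(opts_get_visitor(ov), &object, NULL, &err);
        opts_visitor_cleanup(ov);

        if (err) {
            error_free(err);    /* real code would report it first */
            return -1;
        }

        /* ... apply object->size, object->policy, etc. to the node ... */
        qapi_free_NumaMemOptions(object);
        return 0;
    }

Registering this callback with qemu_opts_foreach() over the "numa"
option group would then run it once per "-numa mem" occurrence on the
command line.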

Paolo



