
From: Igor Mammedov
Subject: Re: [Qemu-devel] [PATCH 21/27] pc: add memory hotplug 440fx machine
Date: Tue, 26 Nov 2013 21:26:09 +0100

On Mon, 25 Nov 2013 18:00:56 +0100
Andreas Färber <address@hidden> wrote:

> Am 25.11.2013 11:41, schrieb Igor Mammedov:
> > On Thu, 21 Nov 2013 17:09:27 +0100
> > Andreas Färber <address@hidden> wrote:
> > 
> >> Am 21.11.2013 15:34, schrieb Igor Mammedov:
> >>> On Thu, 21 Nov 2013 15:13:12 +0100
> >>> Andreas Färber <address@hidden> wrote:
> >>>> Am 21.11.2013 06:48, schrieb Li Guang:
> >>>>> Why not give the memory that not be hot-added a chance to be placed on
> >>>>> one memory slot?
> >>>>
> >>>> Seconded, I believe I requested that on the previous version already.
> >>> Because the current initial memory allocation is a mess and this
> >>> already large series would become even larger and more intrusive;
> >>> so far the series is relatively non-intrusive and self-contained.
> >>>
> >>> I believe refactoring of initial memory to use DIMM devices should
> >>> be done later on top of the infrastructure this series provides.
> >>
> >> My problem with that is that that would be a semantically incompatible
> >> modeling change. With your series we might have /machine/dimm.0/child[0]
> >> be the first hot-plugged memory and once such a refactoring is done,
> >> child[0] silently becomes -m and child[1] is the hot-added one.
> > 
> > I think there won't be a silent change in child[0], since most likely
> > initial RAM would require an additional DimmBus (maybe even two)
> > for its devices.
> > 
> > But anyway, why would this be an issue?
> > 
> >> So if we know that we want/need to change the infrastructure, why not
> >> add a single patch (?) to allocate one additional object on the bus now?
> >> Unless we actually write the code, we won't know whether there are some
> >> incorrect hot-plug assumptions in the dimm code.
> > It wouldn't be a single simple patch for PC, I'm afraid.
> > I don't see the point in adding a dummy DIMM device for initial memory
> > and then re-aliasing its memory region in GPA as is done in the
> > current code.
> > 
> > As I see it (taking into account Marcelo's/Paolo's alignment patch),
> > the current single MR for initial RAM should be split into 1-4
> > separate MRs depending on the initial RAM amount and the alignment
> > requirements between HPA/GPA addresses.
> > 
> > That would probably introduce additional, non-hotpluggable DimmBuses
> > (1-2) for low and high memory, which would be incompatible with old
> > machine types' devices and GPA layout, so why care about what
> > /machine/dimm.0/child[0] would be in a new machine version?
> 
> I feel we're talking about two very different things here.
> 
> What I am talking about is the user experience. A mainboard has 4 or
> maybe 8 DIMM slots where the user can plug in greenish memory bars.
> That's what I would like to see implemented in QEMU because that's
> understandable without reading code and ACPI specs.
> 
> What you seem to be talking about by contrast is your DimmBus
> implementation and its limitations/assumptions. You can easily use
> dev->hotplugged to distinguish between initial and hot-plugged devices
> as done elsewhere, including PCI and ICC bus, no?
Yes, that's what the user would be interested in when doing hot-unplug.
I'll add properties to DimmDevice so the user could see whether it's
"hotpluggable" & "hotplugged".

> 
> In short, what I am fighting against is having a machine with 4 slots:
> 
> slot[0] = 42
> slot[1] = 0
> slot[2] = 0
> slot[3] = 0
> 
> meaning 42 + implicit -m now, later getting fixed to explicit:
> 
> slot[0] = -m
> slot[1] = 42
> slot[2] = 0
> slot[3] = 0
> 
> Whether -m maps to one or more slots can easily be scaled in the
> example, I had previously asked whether there were upper limits per slot
> but believe that got denied from an ACPI perspective; my point is the
> slot offset and the inconsistent sum exposed via QOM/QMP.
Such a change would be machine-incompatible, so why would the slot offset
be important? Depending on the initial memory size, the slot offset would
change. Relying on a stable offset to do something would simply be a
wrong use of the interface.

I see the issue with a sum exposed via QOM/QMP whether it's a links- or
bus-based implementation, but it looks like an additional feature not
related to memory hotplug:
 "let me count all present memory"
This series doesn't provide that; it only provides
 "current hotplug memory enumeration"


> 
> On your ICC bus we had the initial -smp CPUs alongside hot-plugged CPUs
> right from the start.
As Michael said, 1.8 is not in freeze yet, so if there is time I'll
try to convert initial memory to DIMMs as well, for the sake of
cleaning up the mess it's in now and not producing yet another
migration-incompatible machine.
But it's not trivial and not directly related to memory hotplug.
Doing a dummy conversion would help the SUM case from above, but it
would make the current code even messier. So I'd rather do it
incrementally, cleaning it up in the process, vs. making it messier.
 
> 
> I can't think of a reason why there would be multiple DimmBuses in a
> single machine from a user's point of view.
> Different implementations for different memory controllers / machines
> make perfect sense of course. But even from a developer's point of view
> multiple buses don't make that much sense either if we keep
> http://wiki.qemu.org/Features/QOM#TODO in mind - devices exposing
> multiple buses need to be split up and in the end we just have named
> link<> properties on some container object as sketched in the example
> above - question then becomes should we have multiple containers, and I
> think the answer for a PC will be no.
In PC we have to have container memory regions to hold DIMMs because of
the split of memory below/above 4 GB. One way would be to replace the
current initial memory region with a non-hotpluggable bus that would
hold the initial-memory DIMMs.
In case buses are converted to links<> with all the hotplug machinery
around them ready, it could be reorganized into one container with two
MR containers.

> Embedded systems with a mix of small on-chip SRAM and on-board SDRAM may
> be a different beast to model, but well beyond the scope of this series
> anyway, which IIUC doesn't expose any DimmBus outside of the two PCs.
> 
> Also, once memory has been hot-plugged and the machine gets rebooted,
> shouldn't that be the same to BIOS/ACPI as if the memory was cold-plugged?
The guest sees earlier hot-plugged memory after reboot during enumeration
of ACPI memory device objects. Windows & Linux work with it just fine
(the only difference is that Linux doesn't online them automatically;
it's up to udev to deal with it).

I also have a TODO item to evaluate whether it's acceptable to add them
and a reservation to the E820 table, so that the guest could see them
even before ACPI is processed.

> 
> Regards,
> Andreas
> 
> -- 
> SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
> GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


-- 
Regards,
  Igor


