Re: [Qemu-devel] [RFC PATCH v2 2/2] spapr: Memory hot-unplug support


From: Thomas Huth
Subject: Re: [Qemu-devel] [RFC PATCH v2 2/2] spapr: Memory hot-unplug support
Date: Fri, 29 Apr 2016 10:22:03 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.7.0

On 29.04.2016 08:59, Bharata B Rao wrote:
> On Fri, Apr 29, 2016 at 08:45:37AM +0200, Thomas Huth wrote:
>> On 29.04.2016 05:24, David Gibson wrote:
>>> On Tue, Apr 26, 2016 at 04:03:37PM -0500, Michael Roth wrote:
>> ...
>>>> In the case of pseries, the DIMM abstraction isn't really exposed to
>>>> the guest, but rather the memory blocks we use to make the backing
>>>> memdev memory available to the guest. During unplug, the guest
>>>> completely releases these blocks back to QEMU, and if it can only
>>>> release a subset of what's requested it does not attempt to recover.
>>>> We can potentially change that behavior on the guest side, since
>>>> partially-freed DIMMs aren't currently useful on the host-side...
>>>>
>>>> But, in the case of pseries, I wonder if it makes sense to maybe go
>>>> ahead and MADV_DONTNEED the ranges backing these released blocks so the
>>>> host can at least partially reclaim the memory from a partially
>>>> unplugged DIMM?
>>>
>>> Urgh.. I can see the benefit, but I'm a bit uneasy about making the
>>> DIMM semantics different in this way on Power.
>>>
>>> I'm starting to wonder whether shoehorning the PAPR DR memory
>>> mechanism into the qemu DIMM model was a good idea after all.
>>
>> Ignorant question (sorry, I really don't have much experience yet here):
>> Could we maybe align the size of the LMBs with the size of the DIMMs?
>> E.g. make the LMBs bigger or the DIMMs smaller, so that they match?
> 
> Should work, but the question is what the right size should be, so that
> we get good hotplug granularity without running out of memslots and
> thereby limiting maxmem. I remember you raised the memslot limit to 512
> in KVM, but QEMU is still at 32 for sPAPR.

Half of the slots should be "reserved" for PCI and other stuff, so we
could use 256 for memory - that would put us on the same level as x86,
which, as far as I know, also uses 256 memslots here.

Anyway, couldn't we simply calculate SPAPR_MEMORY_BLOCK_SIZE
dynamically, from the maxmem and slots values that the user specified,
so that SPAPR_MEMORY_BLOCK_SIZE would simply match the DIMM size?
... or is there some constraint I've missed that forces
SPAPR_MEMORY_BLOCK_SIZE to be a compile-time #defined value?

 Thomas



