
Re: [Qemu-ppc] [RFC for-2.13 0/7] spapr: Clean up pagesize handling

From: Andrea Bolognani
Subject: Re: [Qemu-ppc] [RFC for-2.13 0/7] spapr: Clean up pagesize handling
Date: Thu, 19 Apr 2018 17:30:04 +0200

On Thu, 2018-04-19 at 16:29 +1000, David Gibson wrote:
> Currently the "pseries" machine type will (usually) advertise
> different pagesizes to the guest when running under KVM and TCG, which
> is not how things are supposed to work.
> This comes from poor handling of hardware limitations which mean that
> under KVM HV the guest is unable to use pagesizes larger than those
> backing the guest's RAM on the host side.
> The new scheme turns things around by having an explicit machine
> parameter controlling the largest page size that the guest is allowed
> to use.  This limitation applies regardless of accelerator.  When
> we're running on KVM HV we ensure that our backing pages are adequate
> to supply the requested guest page sizes, rather than adjusting the
> guest page sizes based on what KVM can supply.
> This means that in order to use hugepages in a PAPR guest it's
> necessary to add a "cap-hpt-mps=24" machine parameter as well as
> setting the mem-path correctly.  This is a bit more work on the user
> and/or management side, but results in consistent behaviour so I think
> it's worth it.
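
For illustration, the workflow described above might look something like the following. This is a sketch based on the series description only; the exact option name and syntax come from the RFC and may change before merge:

```shell
# Hypothetical invocation: cap the guest's maximum HPT page size at
# 2^24 = 16 MiB, and back guest RAM with hugepages on the host side so
# that KVM HV can actually supply pages of that size.
qemu-system-ppc64 \
    -machine pseries,accel=kvm,cap-hpt-mps=24 \
    -m 4G \
    -mem-path /dev/hugepages
```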

libvirt guests already need to explicitly opt-in to hugepages, so
adding this new option automagically based on that shouldn't be too
much of a problem.

A couple of questions:

  * I see the option accepts values 12, 16, 24 and 34, with 16
    being the default. I guess 34 corresponds to 16 GiB hugepages?
    Also, in what scenario would 12 be used?

  * The name of the property suggests this setting is only relevant
    for HPT guests. libvirt doesn't really have the notion of HPT
    and RPT, and I'm not really itching to introduce it. Can we
    safely use this option for all guests, even RPT ones?
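
Regarding the first question, the accepted values look like base-2 logarithms of the page size in bytes; a quick sketch of that interpretation (mine, not spelled out in the series):

```python
# The cap values appear to encode log2 of the page size in bytes:
# 12 -> 4 KiB (base pages), 16 -> 64 KiB, 24 -> 16 MiB hugepages,
# 34 -> 16 GiB hugepages on POWER.
def cap_to_page_size(shift):
    """Convert a cap-hpt-mps-style shift value to a size in bytes."""
    return 1 << shift

for shift in (12, 16, 24, 34):
    print("%2d -> %d bytes" % (shift, cap_to_page_size(shift)))
```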


Andrea Bolognani / Red Hat / Virtualization
