Re: [PATCH v5 1/4] spapr: move NUMA associativity init to machine reset


From: David Gibson
Subject: Re: [PATCH v5 1/4] spapr: move NUMA associativity init to machine reset
Date: Sat, 11 Sep 2021 13:53:32 +1000

On Fri, Sep 10, 2021 at 04:57:14PM -0300, Daniel Henrique Barboza wrote:
> 
> 
> On 9/7/21 6:23 AM, David Gibson wrote:
> > On Tue, Sep 07, 2021 at 09:10:13AM +0200, Greg Kurz wrote:
> > > On Tue, 7 Sep 2021 10:37:27 +1000
> > > David Gibson <david@gibson.dropbear.id.au> wrote:
> > > 
> > > > On Mon, Sep 06, 2021 at 09:25:24PM -0300, Daniel Henrique Barboza wrote:
> > > > > At this moment we only support one form of NUMA affinity, FORM1. This
> > > > > allows us to init the internal structures during machine_init(), and
> > > > > given that NUMA distances won't change during the guest lifetime, we
> > > > > don't need to bother with that again.
> > > > > 
> > > > > We're about to introduce FORM2, a new NUMA affinity mode for pSeries
> > > > > guests. This means that we'll only be certain about the affinity mode
> > > > > being used after client architecture support (CAS), and that the
> > > > > guest can switch affinity modes across machine resets.
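
Purely to illustrate where this is heading (none of this is in the patch
below): once the affinity form is only known after CAS, the reset path
ends up dispatching on the negotiated mode. A rough sketch, assuming the
existing OV5 plumbing (spapr->ov5_cas, spapr_ovec_test()) plus a
hypothetical OV5_FORM2_AFFINITY bit and FORM1/FORM2 init helpers:

    static void numa_associativity_reset_sketch(SpaprMachineState *spapr)
    {
        /* May run more than once: at machine reset, and again at CAS,
         * once the guest has negotiated its affinity form. */
        if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
            init_form2_associativity(spapr);   /* hypothetical helper */
        } else {
            init_form1_associativity(spapr);   /* hypothetical helper */
        }
    }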
> > > > > 
> > > > > Let's prepare the ground for FORM2 support by moving the NUMA
> > > > > internal data init from machine_init() to machine_reset(). Change the
> > > > > name to spapr_numa_associativity_reset() to make it clearer that this
> > > > > is a function that can be called multiple times during the guest
> > > > > lifecycle. We're also simplifying its API, since this method will
> > > > > later be called at CAS time (do_client_architecture_support()), where
> > > > > no MachineState pointer is readily available.
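
To make the simplified calling convention concrete: a minimal sketch of
the future CAS-time call site alluded to above (hypothetical, not part
of this patch). MACHINE() is QEMU's QOM cast macro, so the function can
recover the MachineState from spapr by itself:

    /* In do_client_architecture_support(), only the SpaprMachineState
     * pointer is conveniently in scope, hence the one-argument form: */
    spapr_numa_associativity_reset(spapr);

    /* Inside the function, the QOM cast replaces the old parameter: */
    MachineState *machine = MACHINE(spapr);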
> > > > > 
> > > > > Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> > > > 
> > > > Applied to ppc-for-6.2, thanks.
> > > > 
> > > 
> > > Even if already applied :
> > > 
> > > Reviewed-by: Greg Kurz <groug@kaod.org>
> > 
> > Added, thanks.
> 
> 
> I'm afraid this patch was superseded by the new patch series I just
> posted.

Ok, I've removed the old patch from ppc-for-6.2.

> 
> 
> Thanks,
> 
> 
> Daniel
> 
> > 
> > > > > ---
> > > > >   hw/ppc/spapr.c              | 6 +++---
> > > > >   hw/ppc/spapr_numa.c         | 4 ++--
> > > > >   include/hw/ppc/spapr_numa.h | 9 +--------
> > > > >   3 files changed, 6 insertions(+), 13 deletions(-)
> > > > > 
> > > > > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > > > > index d39fd4e644..8e1ff6cd10 100644
> > > > > --- a/hw/ppc/spapr.c
> > > > > +++ b/hw/ppc/spapr.c
> > > > > @@ -1621,6 +1621,9 @@ static void spapr_machine_reset(MachineState *machine)
> > > > >        */
> > > > >       spapr_irq_reset(spapr, &error_fatal);
> > > > > +    /* Reset numa_assoc_array */
> > > > > +    spapr_numa_associativity_reset(spapr);
> > > > > +
> > > > >       /*
> > > > >        * There is no CAS under qtest. Simulate one to please the code that
> > > > >        * depends on spapr->ov5_cas. This is especially needed to test device
> > > > > @@ -2808,9 +2811,6 @@ static void spapr_machine_init(MachineState *machine)
> > > > >       spapr->gpu_numa_id = spapr_numa_initial_nvgpu_numa_id(machine);
> > > > > -    /* Init numa_assoc_array */
> > > > > -    spapr_numa_associativity_init(spapr, machine);
> > > > > -
> > > > >       if ((!kvm_enabled() || kvmppc_has_cap_mmu_radix()) &&
> > > > >           ppc_type_check_compat(machine->cpu_type, CPU_POWERPC_LOGICAL_3_00, 0,
> > > > >                                 spapr->max_compat_pvr)) {
> > > > > diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> > > > > index 779f18b994..9ee4b479fe 100644
> > > > > --- a/hw/ppc/spapr_numa.c
> > > > > +++ b/hw/ppc/spapr_numa.c
> > > > > @@ -155,10 +155,10 @@ static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
> > > > >   }
> > > > > -void spapr_numa_associativity_init(SpaprMachineState *spapr,
> > > > > -                                   MachineState *machine)
> > > > > +void spapr_numa_associativity_reset(SpaprMachineState *spapr)
> > > > >   {
> > > > >       SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
> > > > > +    MachineState *machine = MACHINE(spapr);
> > > > >       int nb_numa_nodes = machine->numa_state->num_nodes;
> > > > >       int i, j, max_nodes_with_gpus;
> > > > >       bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
> > > > > diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
> > > > > index 6f9f02d3de..0e457bba57 100644
> > > > > --- a/include/hw/ppc/spapr_numa.h
> > > > > +++ b/include/hw/ppc/spapr_numa.h
> > > > > @@ -16,14 +16,7 @@
> > > > >   #include "hw/boards.h"
> > > > >   #include "hw/ppc/spapr.h"
> > > > > -/*
> > > > > - * Having both SpaprMachineState and MachineState as arguments
> > > > > - * feels odd, but it will spare a MACHINE() call inside the
> > > > > - * function. spapr_machine_init() is the only caller for it, and
> > > > > - * it has both pointers resolved already.
> > > > > - */
> > > > > -void spapr_numa_associativity_init(SpaprMachineState *spapr,
> > > > > -                                   MachineState *machine);
> > > > 
> > > > Nice additional cleanup to the signature, thanks.
> > > > 
> > > > > +void spapr_numa_associativity_reset(SpaprMachineState *spapr);
> > > > >   void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas);
> > > > >   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> > > > >                                          int offset, int nodeid);
> > > > 
> > > 
> > 
> > 
> > 
> 

-- 
David Gibson                    | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
                                | _way_ _around_!
http://www.ozlabs.org/~dgibson


