From: Greg Kurz
Subject: Re: [Qemu-devel] [PATCH v2 3/4] target/ppc: consolidate CPU device-tree id computation in helper
Date: Mon, 22 May 2017 11:09:56 +0200

On Mon, 22 May 2017 12:12:46 +1000
David Gibson <address@hidden> wrote:

> On Mon, May 22, 2017 at 12:04:13PM +1000, David Gibson wrote:
> > On Fri, May 19, 2017 at 12:32:20PM +0200, Greg Kurz wrote:  
> > > For historical reasons, we compute CPU device-tree ids with a non-trivial
> > > logic. This patch consolidates the logic in a single helper to be used
> > > in various places where it is currently open-coded.
> > > 
> > > It is okay to get rid of DIV_ROUND_UP() because we're sure that the number
> > > of threads per core in the guest cannot exceed the number of threads per
> > > core in the host.  
> > 
> > However, your new logic still gives different answers in some cases.
> > In particular when max_cpus is not a multiple of smp_threads.  Which
> > is generally a bad idea, but allowed for older machine types for
> > compatibility; e.g. smp_threads=4, max_cpus=6, smt=8
> > 
> > Old logic:
> >              DIV_ROUND_UP(6 * 8, 4)
> >            = ⌈48 / 4⌉ = 12
> > 
> > New logic gives: ⌊6 / 4⌋ * 8 + (6 % 4)
> >                = 1 * 8 + 2
> >                = 10
> > 
> > In any case the DIV_ROUND_UP() isn't to handle the case where guest
> > threads-per-core is bigger than host threads-per-core, it's (IIRC) for
> > the case where guest threads-per-core is not a factor of host
> > threads-per-core.  Again, a bad idea, but I think allowed in some old
> > cases.  
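
To make the divergence concrete, here is a small standalone sketch (plain C,
not QEMU code; the macro is redefined locally) that evaluates both expressions
for the configuration above:

#include <stdio.h>

/* Local copy of QEMU's rounding macro, just for this demo. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    int smp_threads = 4, max_cpus = 6, smt = 8;

    /* Old logic: value passed to spapr_dt_xics() */
    int old_id = DIV_ROUND_UP(max_cpus * smt, smp_threads);

    /* New logic: ppc_cpu_dt_id_from_index(max_cpus) */
    int new_id = (max_cpus / smp_threads) * smt + (max_cpus % smp_threads);

    printf("old=%d new=%d\n", old_id, new_id);  /* prints old=12 new=10 */
    return 0;
}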
> 
> Oh, so, the other more general point here is that I actually want to
> get rid of dt_id from the cpu structure.  It's basically an abuse of
> the cpu stuff to include what's really an spapr concept - dt IDs for
> powernv are based on the PIR and not allocated the same way.
> 

Agreed.

> That said, I'm still ok with a fixed version of this patch as an
> interim step.
> 

Well... I'm not sure anymore I need this patch to fix the migration
breakage.

> > > Signed-off-by: Greg Kurz <address@hidden>
> > > ---
> > >  hw/ppc/spapr.c              |    6 ++----
> > >  target/ppc/cpu.h            |   17 +++++++++++++++++
> > >  target/ppc/translate_init.c |    3 +--
> > >  3 files changed, 20 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > > index 75e298b4c6be..1bb05a9a6b07 100644
> > > --- a/hw/ppc/spapr.c
> > > +++ b/hw/ppc/spapr.c
> > > @@ -981,7 +981,6 @@ static void *spapr_build_fdt(sPAPRMachineState *spapr,
> > >      void *fdt;
> > >      sPAPRPHBState *phb;
> > >      char *buf;
> > > -    int smt = kvmppc_smt_threads();
> > >  
> > >      fdt = g_malloc0(FDT_MAX_SIZE);
> > >      _FDT((fdt_create_empty_tree(fdt, FDT_MAX_SIZE)));
> > > @@ -1021,7 +1020,7 @@ static void *spapr_build_fdt(sPAPRMachineState *spapr,
> > >      _FDT(fdt_setprop_cell(fdt, 0, "#size-cells", 2));
> > >  
> > >      /* /interrupt controller */
> > > -    spapr_dt_xics(DIV_ROUND_UP(max_cpus * smt, smp_threads), fdt, PHANDLE_XICP);
> > > +    spapr_dt_xics(ppc_cpu_dt_id_from_index(max_cpus), fdt, PHANDLE_XICP);
> > >  
> > >      ret = spapr_populate_memory(spapr, fdt);
> > >      if (ret < 0) {
> > > @@ -1977,7 +1976,6 @@ static void spapr_init_cpus(sPAPRMachineState *spapr)
> > >      MachineState *machine = MACHINE(spapr);
> > >      MachineClass *mc = MACHINE_GET_CLASS(machine);
> > >      char *type = spapr_get_cpu_core_type(machine->cpu_model);
> > > -    int smt = kvmppc_smt_threads();
> > >      const CPUArchIdList *possible_cpus;
> > >      int boot_cores_nr = smp_cpus / smp_threads;
> > >      int i;
> > > @@ -2014,7 +2012,7 @@ static void spapr_init_cpus(sPAPRMachineState *spapr)
> > >              sPAPRDRConnector *drc =
> > >                  spapr_dr_connector_new(OBJECT(spapr),
> > >                                         SPAPR_DR_CONNECTOR_TYPE_CPU,
> > > -                                       (core_id / smp_threads) * smt);
> > > +                                       ppc_cpu_dt_id_from_index(core_id));
> > >  
> > >              qemu_register_reset(spapr_drc_reset, drc);
> > >          }
> > > diff --git a/target/ppc/cpu.h b/target/ppc/cpu.h
> > > index 401e10e7dad8..47fe6c64698f 100644
> > > --- a/target/ppc/cpu.h
> > > +++ b/target/ppc/cpu.h
> > > @@ -2529,4 +2529,21 @@ int ppc_get_vcpu_dt_id(PowerPCCPU *cpu);
> > >  PowerPCCPU *ppc_get_vcpu_by_dt_id(int cpu_dt_id);
> > >  
> > >  void ppc_maybe_bswap_register(CPUPPCState *env, uint8_t *mem_buf, int len);
> > > +
> > > +#if !defined(CONFIG_USER_ONLY)
> > > +#include "sysemu/cpus.h"
> > > +#include "target/ppc/kvm_ppc.h"
> > > +
> > > +static inline int ppc_cpu_dt_id_from_index(int cpu_index)
> > > +{
> > > +    /* POWER HV support has an historical limitation that different threads
> > > +     * on a single core cannot be in different guests at the same time. In
> > > +     * order to allow KVM to assign guest threads to host cores accordingly,
> > > +     * CPU device tree ids are spaced by the number of threads per host cores.
> > > +     */
> > > +    return (cpu_index / smp_threads) * kvmppc_smt_threads()
> > > +        + (cpu_index % smp_threads);
> > > +}
> > > +#endif
> > > +
> > >  #endif /* PPC_CPU_H */
> > > diff --git a/target/ppc/translate_init.c b/target/ppc/translate_init.c
> > > index 56a0ab22cfbe..837a9a496a65 100644
> > > --- a/target/ppc/translate_init.c
> > > +++ b/target/ppc/translate_init.c
> > > @@ -9851,8 +9851,7 @@ static void ppc_cpu_realizefn(DeviceState *dev, Error **errp)
> > >      }
> > >  
> > >  #if !defined(CONFIG_USER_ONLY)
> > > -    cpu->cpu_dt_id = (cs->cpu_index / smp_threads) * max_smt
> > > -        + (cs->cpu_index % smp_threads);
> > > +    cpu->cpu_dt_id = ppc_cpu_dt_id_from_index(cs->cpu_index);
> > >  
> > >      if (kvm_enabled() && !kvm_vcpu_id_is_valid(cpu->cpu_dt_id)) {
> > >          error_setg(errp, "Can't create CPU with id %d in KVM", cpu->cpu_dt_id);
> > >   
> >   
> 
> 
> 
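
For reference, a minimal sketch of the mapping the new helper produces,
assuming smp_threads=4 in the guest and kvmppc_smt_threads()=8 on the host
(the same values as in the example above); plain C, not QEMU code:

#include <stdio.h>

/* Same arithmetic as ppc_cpu_dt_id_from_index() in the patch, with the
 * guest and host thread counts passed in explicitly. */
static int dt_id_from_index(int cpu_index, int smp_threads, int smt)
{
    return (cpu_index / smp_threads) * smt + (cpu_index % smp_threads);
}

int main(void)
{
    int smp_threads = 4, smt = 8;
    int i;

    for (i = 0; i < 8; i++) {
        printf("cpu_index %d -> dt_id %d\n",
               i, dt_id_from_index(i, smp_threads, smt));
    }
    /* Threads of guest core 0 get ids 0..3, threads of guest core 1 get
     * ids 8..11: consecutive guest cores are spaced by the host SMT width. */
    return 0;
}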


