qemu-devel

Re: [PATCH v3 03/17] softmmu: Fix CPUSTATE.nr_cores' calculation


From: Zhao Liu
Subject: Re: [PATCH v3 03/17] softmmu: Fix CPUSTATE.nr_cores' calculation
Date: Mon, 7 Aug 2023 18:00:18 +0800

Hi Xiaoyao,

On Mon, Aug 07, 2023 at 04:43:32PM +0800, Xiaoyao Li wrote:
> 
> On 8/7/2023 3:53 PM, Zhao Liu wrote:
> > > > diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> > > > index 97ad229d8ba3..50613cd04612 100644
> > > > --- a/target/i386/cpu.c
> > > > +++ b/target/i386/cpu.c
> > > > @@ -6011,7 +6011,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t 
> > > > index, uint32_t count,
> > > >        X86CPUTopoInfo topo_info;
> > > >        topo_info.dies_per_pkg = env->nr_dies;
> > > > -    topo_info.cores_per_die = cs->nr_cores;
> > > > +    topo_info.cores_per_die = cs->nr_cores / env->nr_dies;
> > > This and below things make me think that, it looks ugly that @nr_dies is
> > > added separately in struct CPUArchState for i386 because CPUState only has
> > > @nr_cores and nr_threads. Further, for i386, it defines a specific struct
> > > X86CPUTopoInfo to contain topology info when setting up CPUID. To me, 
> > > struct
> > > X86CPUTopoInfo is redundant as struct CpuTopology.
> > > 
> > > maybe we can carry a struct CpuTopology in CPUState, so that we can drop
> > > @nr_threads, @nr_cores in CPUState for all ARCHes, and @nr_dies in struct
> > > CPUArchState for i386. As well, topo_info can be dropped here.
> > Yeah, I agree. We're thinking along the same lines, as was done in [1].
> > 
> > About X86CPUTopoInfo, it's still necessary to keep it to help encode
> > the APIC ID.
> 
> typedef struct X86CPUTopoInfo {
>     unsigned dies_per_pkg;
>     unsigned cores_per_die;
>     unsigned threads_per_core;
> } X86CPUTopoInfo;
> 
> /**
>  * CpuTopology:
>  * @cpus: the number of present logical processors on the machine
>  * @sockets: the number of sockets on the machine
>  * @dies: the number of dies in one socket
>  * @clusters: the number of clusters in one die
>  * @cores: the number of cores in one cluster
>  * @threads: the number of threads in one core
>  * @max_cpus: the maximum number of logical processors on the machine
>  */
> typedef struct CpuTopology {
>     unsigned int cpus;
>     unsigned int sockets;
>     unsigned int dies;
>     unsigned int clusters;
>     unsigned int cores;
>     unsigned int threads;
>     unsigned int max_cpus;
> } CpuTopology;
> 
> I think 'struct X86CPUTopoInfo' is just a subset of 'struct CpuTopology'.

For smp topology, it's correct.

> 
> IIUC, currently the value of X86CPUTopoInfo::dies_per_pkg should equal
> CpuTopology::dies, and the same for cores_per_die and threads_per_core.
> 
> So it's OK to keep a copy of 'struct CpuTopology' in CPUState and drop
> 'struct X86CPUTopoInfo'.
> 
> > For the hybrid topology case, the APIC IDs are likely to be
> > discontinuous, and the width of each CPU level in the APIC ID depends
> > on the maximum number of elements at that level. So I also proposed to
> > rename it to X86ApicidTopoInfo [2] and count the maximum number of
> > elements at each CPU level.
> 
> Do you mean that, for example, for a hybrid topology,
> X86CPUTopoInfo::dies_per_pkg != CpuTopology::dies? Or, after the rename,
> X86CPUTopoInfo::max_dies != CpuTopology::dies?

I mean the latter.

A more typical example nowadays is the thread level.

X86CPUTopoInfo::max_threads may not equal CpuTopology::threads,

since a P-core has 2 threads per core and an E-core doesn't support SMT.

The CpuTopology in CPUState should reflect the topology information for
the current CPU, so CpuTopology::threads is 2 for a P-core and
CpuTopology::threads is 1 for an E-core.

But the width of the SMT level in the APIC ID must be fixed, so the SMT
width should be determined by X86CPUTopoInfo::max_threads. Current hybrid
platforms implement it the same way.

Thanks,
Zhao

> 
> > [2]: https://mail.gnu.org/archive/html/qemu-devel/2023-02/msg03237.html
> > 
> > Thanks,
> > Zhao
> > 
> 


