Re: [Qemu-devel] [PATCH] ppc/pnv: fix cores per chip for multiple cpus


From: Nikunj A Dadhania
Subject: Re: [Qemu-devel] [PATCH] ppc/pnv: fix cores per chip for multiple cpus
Date: Wed, 20 Sep 2017 09:50:24 +0530

David Gibson <address@hidden> writes:

> On Fri, Sep 15, 2017 at 02:39:16PM +0530, Nikunj A Dadhania wrote:
>> David Gibson <address@hidden> writes:
>> 
>> > On Fri, Sep 15, 2017 at 01:53:15PM +0530, Nikunj A Dadhania wrote:
>> >> David Gibson <address@hidden> writes:
>> >> 
>> >> >> 
>> >> >> I thought I was doing the same here for PowerNV: the number of online
>> >> >> cores is equal to the initial online vcpus / threads per core:
>> >> >> 
>> >> >>    int boot_cores_nr = smp_cpus / smp_threads;
>> >> >> 
>> >> >> The only difference that I see in PowerNV is that we have multiple
>> >> >> chips (max 2, at the moment):
>> >> >> 
>> >> >>         cores_per_chip = smp_cpus / (smp_threads * pnv->num_chips);
>> >> >
>> >> > This doesn't make sense to me.  Cores per chip should *always* equal
>> >> > smp_cores; you shouldn't need another calculation for it.
>> >> >
>> >> >> And in case the user has provided a sane smp_cores, we use it.
>> >> >
>> >> > If smp_cores isn't sane, you should simply reject it, not try to fix
>> >> > it.  That's just asking for confusion.
>> >> 
>> >> This is the case where the user does not provide a topology (which is a
>> >> valid scenario), so I am not sure we should reject it. QEMU then defaults
>> >> smp_cores/smp_threads to 1. I think it makes sense to override.
>> >
>> > If you can find a way to override it by altering smp_cores when it's
>> > not explicitly specified, then ok.
>> 
>> Should I change the global smp_cores here as well?
>
> I'm pretty uneasy with that option.

Me too.

> It would take a fair bit of checking to ensure that changing smp_cores
> is safe here. An easier to verify option would be to make the generic
> logic which splits up an unspecified -smp N into cores and sockets
> more flexible, possibly based on machine options for max values.
>
> That might still be more trouble than it's worth.

I think the current approach is the simplest and least intrusive: we are
handling a case where the user has not bothered to provide a detailed
topology, and the best we can do is create single-threaded cores equal to
the number of vcpus.
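
For completeness, here is a minimal sketch of the default-topology handling
discussed above. It is not the actual patch; the helper name and the way the
values are passed in are illustrative only, and it assumes smp_cores stays at
its default of 1 when no topology is given:

    /* Sketch only: derive cores per chip for PowerNV when the user passes
     * just "-smp N" and leaves cores/threads at their default of 1.
     * Helper name and parameters are illustrative, not QEMU's API. */
    static int pnv_cores_per_chip(int smp_cpus, int smp_cores,
                                  int smp_threads, int num_chips)
    {
        if (smp_cores != 1) {
            /* The user gave an explicit topology; use it as-is. */
            return smp_cores;
        }
        /* No topology given: spread single-threaded cores evenly
         * across the chips, as in the formula quoted above. */
        return smp_cpus / (smp_threads * num_chips);
    }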

Regards
Nikunj



