From: Luc Michel
Subject: Re: [Qemu-arm] [Qemu-devel] [PATCH v7 01/16] hw/cpu: introduce CPU clusters
Date: Mon, 3 Dec 2018 12:20:58 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.3.1

On 11/30/18 5:52 PM, Peter Maydell wrote:
> On Mon, 26 Nov 2018 at 13:27, Eduardo Habkost <address@hidden> wrote:
>>
>> On Sun, Nov 25, 2018 at 10:27:04PM +0100, Philippe Mathieu-Daudé wrote:
>>> Hi Eduardo,
>>>
>>> On 23/11/18 19:10, Eduardo Habkost wrote:
>>>> If you really want to do this and assign cluster_id
>>>> automatically, please do it on realize, where it won't have
>>>> unexpected side-effects after a simple `qom-list-properties` QMP
>>>> command.
>>>
>>> This looks clean enough to me.
>>> Do you prefer we don't use static auto_increment at all?
>>
>> I would prefer to avoid the static variable if possible, but I
>> won't reject it if the alternatives make the API too complex to
>> use.
> 
> I guess the alternative would be to require the cluster ID
> to be a QOM property on the cluster object. Usage would then
> be something like
>     object_initialize_child(OBJECT(s), "rpu-cluster", &s->rpu_cluster,
>                             sizeof(s->rpu_cluster), TYPE_CPU_CLUSTER,
>                             &error_abort, NULL);
>     object_property_set_int(OBJECT(&s->rpu_cluster), 1, "cluster-id",
>                             &error_abort);
>     qdev_init_nofail(DEVICE(&s->rpu_cluster));
> 
> (ie one extra line setting the cluster ID explicitly).
I had such a line in v3; it was removed following Philippe's
auto-assign suggestion. The discussion seems to be converging on
removing the auto-assign mechanism though, so I can roll back to
manual assignment for my next re-roll.
Philippe: is that OK for you?
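For reference, the cluster side with manual assignment would be
roughly the following sketch, using the CPUClusterState/cluster_id
names from my series (the property and class_init details below are
illustrative, not final code):

    #include "qemu/osdep.h"
    #include "hw/qdev-properties.h"
    #include "hw/cpu/cluster.h"

    static Property cpu_cluster_properties[] = {
        /* Boards must set this explicitly before realize;
         * there is no auto-assignment. */
        DEFINE_PROP_UINT32("cluster-id", CPUClusterState, cluster_id, 0),
        DEFINE_PROP_END_OF_LIST()
    };

    static void cpu_cluster_class_init(ObjectClass *klass, void *data)
    {
        DeviceClass *dc = DEVICE_CLASS(klass);

        dc->props = cpu_cluster_properties;
    }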

> 
> SoC objects can probably reasonably assume that they are
> the only SoC in the system (ie that they can just assign
> cluster IDs themselves). If that turns out not to be true
> we can make the SoC take a property to specify a cluster ID
> base or something.
> 
> I guess if we've been bitten by cpu-ID auto-assignment
> in the past, starting out with "user picks the cluster ID"
> would be safer, and it's not too much pain at the callsite.
> Plus it avoids the gdbstub changing its mind about which
> order the cluster "processes" appear in if we refactor the
> board code.
> 
>> In either case, I'd like to ensure the caveats of the cluster_id
>> field are clearly documented: no guarantees about ordering or
>> predictability, making it not appropriate for external
>> interfaces.
> 
> The gdb stub is sort of an external interface, of course...
> 
> Luc: what are the requirements on boards using CPU cluster
> objects? I assume these are both OK:
>  * does not use cluster objects at all
>    (the gdbstub puts all the CPUs in one process?)
Yes, when no clusters are found, a process with PID 1 is created and
used for all CPUs.
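In code terms, something like this sketch (GDBProcess and the exact
bookkeeping are illustrative, don't take the details literally):

    /* Called when the machine has no CPU clusters: create a single
     * default process, PID 1, that all CPUs will be put into. */
    static void gdb_create_default_process(GDBState *s)
    {
        GDBProcess *process;

        s->processes = g_renew(GDBProcess, s->processes,
                               ++s->process_num);
        process = &s->processes[s->process_num - 1];
        process->pid = 1;
        process->attached = false;
    }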
>  * all CPUs created are in some CPU cluster
>    (the gdbstub uses one process per CPU cluster)
> but what about
>  * some CPUs are created in a CPU cluster, but some
>    are "loose", not in any cluster at all?
> Is that just invalid, or do the "loose" CPUs end up in
> a default cluster (gdbstub process), or do they get
> one cluster each?
Currently this is valid and the "loose" CPUs end up in the first
available process (which will be PID 1 if we are in your first case,
i.e. no clusters in the machine).
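Roughly like this sketch (the cluster_index/UNASSIGNED_CLUSTER_INDEX
names are illustrative, my series matches CPUs to clusters
differently, but the behaviour is as described):

    static uint32_t gdb_get_cpu_pid(const GDBState *s, CPUState *cpu)
    {
        if (cpu->cluster_index == UNASSIGNED_CLUSTER_INDEX) {
            /* "Loose" CPU: first available process, which is PID 1
             * when the machine has no clusters at all. */
            return s->processes[0].pid;
        }
        /* Otherwise one gdb process per cluster; assume here that
         * PIDs simply follow the cluster index. */
        return cpu->cluster_index + 1;
    }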

> 
> If it's invalid (which doesn't seem like a bad idea),
> is there an assert somewhere so that getting it wrong
> results in the board failing to start (ie a "make check"
> failure) ?
I could add such a check if you think it's a good idea, i.e. assert
that either no clusters exist or all CPUs are in a cluster.

But maybe this check should not ultimately live in the gdb stub? For
the same reasons, you may want to ensure that all CPUs have a
well-known address space, or whatever other information the cluster
will provide.
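Either way, the check itself could look like this sketch, reusing the
illustrative cluster_index field from above:

    /* Either no CPU is in a cluster, or all of them are. */
    static void gdb_validate_clusters(void)
    {
        CPUState *cpu;
        bool some_clustered = false;
        bool some_loose = false;

        CPU_FOREACH(cpu) {
            if (cpu->cluster_index == UNASSIGNED_CLUSTER_INDEX) {
                some_loose = true;
            } else {
                some_clustered = true;
            }
        }

        assert(!(some_clustered && some_loose));
    }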

Thanks.

-- 
Luc


