qemu-s390x
From: Daniel P. Berrangé
Subject: Re: [PATCH v1 2/9] s390x: toplogy: adding drawers and books to smp parsing
Date: Fri, 16 Jul 2021 10:18:52 +0100
User-agent: Mutt/2.0.7 (2021-05-04)

On Fri, Jul 16, 2021 at 11:10:04AM +0200, Cornelia Huck wrote:
> On Thu, Jul 15 2021, Markus Armbruster <armbru@redhat.com> wrote:
> 
> > Pierre Morel <pmorel@linux.ibm.com> writes:
> >
> >> On 7/15/21 8:16 AM, Markus Armbruster wrote:
> >>> Pierre Morel <pmorel@linux.ibm.com> writes:
> >>> 
> >>>> Drawers and Books are levels 4 and 3 of the S390 CPU
> >>>> topology.
> >>>> We allow the user to define these levels and we will
> >>>> store the values inside the S390CcwMachineState.
> >>> 
> >>> Double-checking: are these members specific to S390?
> >>
> >> Yes AFAIK
> >
> > Makes me wonder whether they should be conditional on TARGET_S390X.
> >
> > What happens when you specify them for another target?  Silently
> > ignored, or error?
> 
> I'm wondering whether we should include them in the base machine state
> and treat them as we treat 'dies' (i.e. the standard parser errors out
> if they are set, and only the s390x parser supports them.)

To repeat what I just wrote in my reply to patch 1, I think we ought to
think about a different approach to handling the usage constraints,
one that doesn't require a full re-implementation of the smp_parse
method each time.  There should be a way for each target to report its
topology constraints, such that the single smp_parse method can do the
right thing, especially with respect to error reporting for unsupported
values.
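[To make the suggestion above concrete, here is a minimal sketch of what a target-supplied constraints description could look like. The SMPConstraints struct, its fields, and smp_check() are all hypothetical names invented for illustration, not QEMU's actual API.]

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/*
 * Hypothetical per-target topology constraints.  Each target fills in
 * which topology levels it supports; the single generic parser consults
 * this instead of each target overriding smp_parse() wholesale.
 */
typedef struct SMPConstraints {
    bool has_drawers;   /* s390x: topology level 4 */
    bool has_books;     /* s390x: topology level 3 */
    bool has_dies;      /* x86: dies per socket */
} SMPConstraints;

/*
 * Generic validation: reject any topology level the target does not
 * support, producing a uniform error message.  Returns true on success,
 * false with an error string in err on failure.
 */
static bool smp_check(const SMPConstraints *c,
                      unsigned drawers, unsigned books, unsigned dies,
                      char *err, size_t errlen)
{
    if (drawers > 1 && !c->has_drawers) {
        snprintf(err, errlen, "drawers > 1 not supported by this machine");
        return false;
    }
    if (books > 1 && !c->has_books) {
        snprintf(err, errlen, "books > 1 not supported by this machine");
        return false;
    }
    if (dies > 1 && !c->has_dies) {
        snprintf(err, errlen, "dies > 1 not supported by this machine");
        return false;
    }
    return true;
}
```

[With something like this, an s390x machine would declare drawers/books supported and dies unsupported, a pc machine the reverse, and the common parser would reject unsupported values the same way for every target.]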

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|
