From: Jonathan Cameron
Subject: Re: [RFC v4 3/3] hw/cxl: Multi-Region CXL Type-3 Devices (Volatile and Persistent)
Date: Fri, 20 Jan 2023 10:59:50 +0000

On Thu, 19 Jan 2023 17:13:40 -0500
Gregory Price <gregory.price@memverge.com> wrote:

> On Thu, Jan 19, 2023 at 05:31:12PM +0000, Jonathan Cameron wrote:
> > On Thu, 19 Jan 2023 12:15:45 -0500
> > Gregory Price <gregory.price@memverge.com> wrote:
> >   
> > > Found a bug, not sure how we missed this; it probably happened during
> > > rebasing and some fixups. We're presently reporting the volatile region
> > > as non-volatile. One-line patch.
> > > 
> > > Jonathan do you want a separate patch shipped or would you rather just
> > > apply a fixup to the commit in your current branch?  
> > I'll fix up as I'd only squash the patch in anyway.
> > 
> > If tomorrow is slightly less crazy busy than today I'll push out a new
> > tree with this and the pass through decoders support RFC
> > (I'll post that to the lists as well)
> > 
> > Jonathan
> >   
> 
> Aye aye! 
> 
> One other change to consider: .EFI_memory_type_attr is currently set
> to RESERVED.  Should this field actually be EFI_MEMORY_SP, or is there a
> reason for it being explicitly Reserved?
> 
> 0: EfiConventionalMemory
> 1: EfiConventionalMemory w/ EFI_MEMORY_SP Attribute
> 2: EfiReservedMemoryType
> 
> I remember reading a while back that the intended marking is
> special-purpose rather than reserved, but I'm hitting the limit of my
> knowledge here.  
> 
> Dan may know, but I couldn't divine the correct setting from the kernel
> (obviously this is EFI-level code, so I didn't expect to).

Yes, it would be better to report it as EfiConventionalMemory + SP.
It shouldn't hugely matter from a practical point of view though (I haven't
checked), as both mean driver-managed and this is more about policy than
anything else.
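
For reference, a sketch of those three values as named constants (the enum and
identifier names below are made up for illustration and are not existing QEMU
or CDAT header definitions; only the 0/1/2 values come from the list quoted
above):

    /* Hypothetical names; values match the list quoted above. */
    enum cdat_dsemts_efi_attr {
        CDAT_DSEMTS_EFI_CONVENTIONAL    = 0, /* EfiConventionalMemory */
        CDAT_DSEMTS_EFI_CONVENTIONAL_SP = 1, /* EfiConventionalMemory + EFI_MEMORY_SP */
        CDAT_DSEMTS_EFI_RESERVED        = 2, /* EfiReservedMemoryType */
    };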

> 
> 
> 
> One other thing I'm noticing: when a CFMW is registered, an
> nvdimm_bridge device becomes present in /sys/bus/cxl/devices,
> regardless of whether the type3 device is persistent or volatile.
> 

That's one for Dan.  The key point is that I don't think anyone is claiming
the kernel code yet supports 'hot plug / cold plug' of volatile type 3
devices.  I expect a non-trivial amount of work will be needed to enable
that, simply because it hasn't previously been of interest.

> This makes me believe we aren't setting something up correctly in the
> CDAT, but other than the changes below everything else looks correct.
> This could imply a kernel driver bug, but I've been validating against
> real hardware and this behavior is not seen there, even with functional
> CXL memory expander devices (which the BIOS obviously has a hand in
> setting up).
> 
> I started validating the DVSECs, but likewise saw no indication of an
> error there.
> 
> 
> 
> diff --git a/hw/mem/cxl_type3.c b/hw/mem/cxl_type3.c
> index 919cdf141e..4daa0cf0f6 100644
> --- a/hw/mem/cxl_type3.c
> +++ b/hw/mem/cxl_type3.c
> @@ -132,8 +132,9 @@ static int ct3_build_cdat_entries_for_mr(CDATSubHeader **cdat_table,
>              .length = sizeof(*dsemts),
>          },
>          .DSMAS_handle = dsmad_handle,
> -        /* Reserved - the non volatile from DSMAS matters */
> -        .EFI_memory_type_attr = 2,
> +        /* Reserved if NV - the non volatile from DSMAS matters
> +         * otherwise label this EFI_MEMORY_SP (special purpose) */
> +        .EFI_memory_type_attr = is_pmem ? 2 : 1,
>          .DPA_offset = 0,
>          .DPA_length = int128_get64(mr->size),
>      };
> @@ -187,7 +188,7 @@ static int ct3_build_cdat_table(CDATSubHeader ***cdat_table, void *priv)
>      /* Now fill them in */
>      if (volatile_mr) {
>          rc = ct3_build_cdat_entries_for_mr(table, dsmad_handle++, volatile_mr,
> -                                           true, 0);
> +                                           false, 0);
>          if (rc < 0) {
>              return rc;
>          }
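
With constants like those, the new assignment in the first hunk could read
(again just a readability sketch using the hypothetical names above, not part
of the posted patch):

    .EFI_memory_type_attr = is_pmem ? CDAT_DSEMTS_EFI_RESERVED
                                    : CDAT_DSEMTS_EFI_CONVENTIONAL_SP,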



