Re: cxl nvdimm Potential probe ordering issues.
From: Jonathan Cameron
Subject: Re: cxl nvdimm Potential probe ordering issues.
Date: Fri, 20 Jan 2023 10:47:09 +0000
On Thu, 19 Jan 2023 23:53:53 -0500
Gregory Price <gregory.price@memverge.com> wrote:
> On Thu, Jan 19, 2023 at 03:04:49PM +0000, Jonathan Cameron wrote:
> > Gregory, would you mind checking if
> > cxl_nvb is NULL here...
> > https://elixir.bootlin.com/linux/v6.2-rc4/source/drivers/cxl/pmem.c#L67
> > (printk before it is used should work).
> >
> > Might also be worth checking cxl_nvd and cxl_ds
> > but my guess is cxl_nvb is our problem (it is when I deliberately
> > change the load order).
> >
> > Jonathan
> >
>
> This is exactly the issue. cxl_nvb is null, the rest appear fine.
>
> Also, note, that weirdly the non-volatile bridge shows up when launching
> this in volatile mode, but no stack trace appears.
>
> ¯\_(ツ)_/¯
>
> After spending way too much time tracing through the current cxl driver
> code, I have only really determined that
>
> 1) The code is very pmem oriented, and it's unclear to me how the driver
> as-is differentiates a persistent device from a volatile device. That
> code path still completely escapes me. The only differentiating code
> I see is in the memdev probe path that creates mem#/pmem and mem#/ram
Absolutely on pmem. Target for kernel side of things was always pmem
first. Volatile has been on roadmap / todo list for a few kernel cycles
but I haven't seen any code yet.
>
> 2) The code successfully manages to probe, enable, and mount a REAL device
> - cxl memdev appears (/sys/bus/cxl/devices/mem0)
> - a dax device appears (/sys/bus/dax/devices/)
> This happens at boot, which I assume must be BIOS related
> - The memory *does not* auto-online; instead, the dax device can be
> onlined as system-ram *manually* via ndctl and friends
Interesting. Just curious, is the host a CXL 1.1 host or a CXL 2.0 host?
>
> 3) The code creates an nvdimm_bridge IFF a CFMW is defined - regardless
> of the type-3 device configuration (pmem-only or vmem-only)
>
> # CFMW defined
> [root@fedora ~]# ls /sys/bus/cxl/devices/
> decoder0.0 decoder2.0 mem0 port1
> decoder1.0 endpoint2 nvdimm-bridge0 root0
>
> # CFMW not defined
> [root@fedora ~]# ls /sys/bus/cxl/devices/
> decoder1.0 decoder2.0 endpoint2 mem0 port1 root0
That should be harmless and may be needed to tie everything
through to DAX.
>
> 4) As you can see above, multiple decoders are registered. I'm not sure
> if that's correct or not, but it does seem odd given there's only one
> cxl type-3 device. Odd that decoder0.0 shows up when CFMW is there,
> but not when it isn't.
>
> Note: All these tests have two root ports:
> -device pxb-cxl,id=cxl.0,bus=pcie.0,bus_nr=52 \
> -device cxl-rp,id=rp0,bus=cxl.0,chassis=0,port=0,slot=0 \
> -device cxl-rp,id=rp1,bus=cxl.0,chassis=0,port=1,slot=1 \
IIRC
decoder0.0 represents the fixed routing in the host as defined
by the CFMWS - not an actual programmable decoder.
decoder1.0 is the routing in the host bridge - may be pass through
decoder if there is only one root port.
decoder2.0 is the one in the endpoint itself.
>
>
> Don't know why I haven't thought of this until now, but is the CFMW code
> reporting something odd about what's behind it? Is it assuming the
> devices are pmem?
It reports the ability to support pmem or support volatile or support both.
Currently
https://elixir.bootlin.com/qemu/latest/source/hw/acpi/cxl.c#L107
QEMU reports that all CFMWS windows support everything except
"Fixed Device Configuration" (Bit[4]), which would tell the OS not
to move devices that are already programmed out of this window,
and which doesn't really make sense for QEMU to ever set.
That is, we support all of:
- Device coherent (type 2, and back-invalidate flows on type 3, though we
  aren't emulating the back-invalidate stuff yet on the EP)
- Host-only coherent [thinking about it, we should probably not
  support both this and device coherent as they would be mutually
  incompatible on a real host]
- Volatile
- Persistent
Jonathan