[RFC] Set addresses for memory devices [CXL]

From: Ben Widawsky
Subject: [RFC] Set addresses for memory devices [CXL]
Date: Wed, 27 Jan 2021 19:51:46 -0800

Hi list, Igor.

I wanted to get some ideas on how to better handle this. Per the recent
discussion [1], it's become clear that there needs to be more thought put into
how to manage the address space for CXL memory devices. If you see the
discussion on interleave [2] there's a decent diagram for the problem statement.

A CXL topology looks just like a PCIe topology. A CXL memory device is a memory
expander: a byte-addressable address range backed by a combination of persistent
and volatile memory. In a CXL-capable system, you can effectively think of these
things as more configurable NVDIMMs. The memory devices expose an interface, the
HDM (Host-managed Device Memory) decoder, that lets the OS program the base
physical address range the device claims. A larger, platform-specific address
range is claimed by a host bridge (or a combination of host bridges in the
interleaved case).

Originally, my plan was to create a single memory backend for a "window" and
map the devices into it as subregions. For example, with two 256M devices under
a host bridge, the window would be a 512M+ memory backend at some fixed GPA, and
each memory device would be a subregion of the host bridge's window. I thought
this was working in my patch series, but as it turns out, it doesn't behave as I
intended: `info mtree` looks right, but `info memory-devices` doesn't.
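For reference, the plan above roughly corresponds to the following QEMU-internal
sketch (it won't build outside the tree; `hb`, `dev0_mr`, `dev1_mr`, and
`CXL_WINDOW_BASE` are placeholder names, and the sizes match the example):

```c
/* Container for the host bridge's window. */
MemoryRegion *window = g_new0(MemoryRegion, 1);
memory_region_init(window, OBJECT(hb), "cxl-window", 512 * MiB);
memory_region_add_subregion(get_system_memory(), CXL_WINDOW_BASE, window);

/* Each 256M device becomes a subregion at its programmed offset
 * within the window. */
memory_region_add_subregion(window, 0,         dev0_mr);
memory_region_add_subregion(window, 256 * MiB, dev1_mr);
```

This nesting is what makes `info mtree` look correct, since the mtree dump walks
the region hierarchy rather than the memory-device list.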

So let me list the requirements and hopefully get some feedback on the best way
to handle it.
1. A PCIe like device has a persistent memory region (I don't care about
volatile at the moment).
2. The physical address base for the memory region is programmable.
3. Memory accesses will support interleaving across multiple host bridges.

As far as I can tell, there isn't anything that works quite like this today,
and my attempts so far haven't been correct.



