

From: Xiao Guangrong
Subject: Re: [Qemu-devel] [PATCH v2 06/11] nvdimm acpi: initialize the resource used by NVDIMM ACPI
Date: Mon, 22 Feb 2016 18:30:03 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Thunderbird/38.5.1



On 02/19/2016 04:43 PM, Dan Williams wrote:
On Fri, Feb 19, 2016 at 12:08 AM, Michael S. Tsirkin <address@hidden> wrote:
On Thu, Feb 18, 2016 at 11:05:23AM +0100, Igor Mammedov wrote:
On Thu, 18 Feb 2016 12:03:36 +0800
Xiao Guangrong <address@hidden> wrote:

On 02/18/2016 01:26 AM, Michael S. Tsirkin wrote:
On Wed, Feb 17, 2016 at 10:04:18AM +0800, Xiao Guangrong wrote:
As for the rest, could those commands go via the MMIO that we usually
use for the control path?

So both input data and output data go through a single MMIO region; we need to
introduce a protocol to pass these data, and that is complex?

And is there any MMIO we can reuse (even more complex?), or should we allocate this
MMIO page (the old question - where to allocate it?)?
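
(For illustration only: one minimal way such a shared page could be laid out.
Every name below is hypothetical and this is not QEMU's actual DSM interface;
the idea is that the guest fills in the command and input, kicks a single
doorbell register, and reads status and output back from the same page.)

    /* Hypothetical layout of one 4K page shared between guest and QEMU
     * for NVDIMM _DSM data -- a sketch of the idea, not QEMU code. */
    #include <stdint.h>

    #define DSM_PAGE_SIZE   4096
    #define DSM_IN_MAX      1024
    #define DSM_OUT_MAX     (DSM_PAGE_SIZE - DSM_IN_MAX - 16)

    struct dsm_page {
        uint32_t handle;            /* which NVDIMM the call targets   */
        uint32_t function;          /* _DSM function number            */
        uint32_t status;            /* written by the host on return   */
        uint32_t in_length;         /* valid bytes in in[]             */
        uint8_t  in[DSM_IN_MAX];    /* guest -> host payload           */
        uint8_t  out[DSM_OUT_MAX];  /* host -> guest payload           */
    };

    /* Everything fits in one page, so a single MMIO/PIO register that
     * takes the page's guest-physical address is enough as a doorbell. */
    _Static_assert(sizeof(struct dsm_page) == DSM_PAGE_SIZE, "one page");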
Maybe you could reuse/extend the memhotplug IO interface,
or alternatively, as Michael suggested, add a vendor-specific PCI_Config.
I'd suggest the PM device for that (hw/acpi/[piix4.c|ich9.c]),
which I like even better since you won't need to care about which ports
to allocate at all.

Well, if Michael does not object, I will do it in the next version. :)

Sorry, the thread's so long by now that I'm no longer sure what "it" refers to.

Never mind, I saw you were busy on other loops.

"It" means the suggestion of Igor that "map each label area right after each
NVDIMM's data memory"
Michael pointed out that putting the label right after each NVDIMM
might burn up to 256GB of address space for 256 NVDIMMs, due to the DIMMs' alignment.
However, if the address for each label is picked with pc_dimm_get_free_addr()
and the label's MemoryRegion alignment is the default 2MB, then all labels
would be allocated close to each other within a single 1GB range.

That would burn only 1GB for 500 labels, which is more than the possible 256 NVDIMMs.
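
(Back-of-the-envelope check of the numbers above; the constants are the ones
quoted in this thread, not values taken from QEMU code.)

    /* Compare the two placements discussed above: a label placed right
     * after every DIMM costs a whole DIMM-alignment slot per NVDIMM,
     * while labels packed with 2MB alignment stay within ~1GB. */
    #include <stdio.h>

    int main(void)
    {
        const unsigned long long MiB = 1ULL << 20;
        const unsigned long long GiB = 1ULL << 30;

        unsigned long long after_each_dimm = 256ULL * GiB;      /* 256 NVDIMMs, 1GB alignment each */
        unsigned long long packed_labels   = 500ULL * 2 * MiB;  /* 500 labels, 2MB alignment       */

        printf("label after each DIMM: %llu GiB\n", after_each_dimm / GiB); /* 256 GiB  */
        printf("packed label area:     %llu MiB\n", packed_labels / MiB);   /* 1000 MiB */
        return 0;
    }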

I thought about it; once we support hotplug, this means that one will
have to pre-declare how much is needed so QEMU can mark the correct amount of
memory reserved, and that would be nasty. Maybe we always pre-reserve 1GB.
Okay, but next time we need something, do we steal another gigabyte?
It seems too much; I'll think it over on the weekend.

Really, most other devices manage to get by with 4K chunks just fine; I
don't see why we are so special and need to steal gigabytes of
physically contiguous ranges.

What's the driving use case for labels in the guest?  For example,
NVDIMM-N devices are supported by the kernel without labels.

Yes, I see the Linux driver supports label-less vNVDIMM, which is exactly what
current QEMU does. However, label-less is a Linux-specific implementation (as it
completely bypasses namespaces); other OS vendors (e.g. Microsoft) will use label
storage to address their own requirements, or they may not follow the namespace spec
at all. Another reason is that the label is essential for PBLK support.

BTW, the label support can be configured dynamically, and it will be disabled
by default.


I certainly would not want to sacrifice 1GB alignment for a label area.


Yup, me too.


