
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu





On 01/21/2016 01:07 AM, Jan Beulich wrote:
On 20.01.16 at 16:29, <guangrong.xiao@xxxxxxxxxxxxxxx> wrote:
On 01/20/2016 07:20 PM, Jan Beulich wrote:
To answer this I need to have my understanding confirmed that the partitioning
is done by firmware: if that is the case, then "normal" means the part that
doesn't get exposed as a block device (SSD). In any event, there's no
correlation to guest exposure here.

Firmware does not manage the NVDIMM; all NVDIMM operations are handled
by the OS.
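
For instance, once the OS exposes a PMEM range (described below) to
applications, the data path involves neither firmware nor a driver. A minimal
userspace sketch, assuming a Linux-style /dev/pmem0 device node (the device
name and the 4 KiB mapping size are illustrative, not from this patch series):

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Hypothetical PMEM device node exported by the OS. */
        int fd = open("/dev/pmem0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        size_t len = 4096;
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Plain CPU stores go straight to the NVDIMM-backed mapping... */
        strcpy(p, "hello, pmem");

        /* ...but CPU caches must be flushed before the data is durable.
         * msync() is the portable way; clwb + sfence would avoid the
         * syscall on hardware that supports it. */
        if (msync(p, len, MS_SYNC) < 0) { perror("msync"); return 1; }

        munmap(p, len);
        close(fd);
        return 0;
    }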

Actually, there are lots of things we need to take into account if we move
NVDIMM management into the hypervisor:
a) ACPI NFIT interpretation
     NFIT is a new ACPI table introduced in ACPI 6.0; it exports the basic
     information about NVDIMM devices, including PMEM info, PBLK info,
     NVDIMM device interleave, vendor info, etc. Let me explain it one by
     one.

     PMEM and PBLK are two modes of accessing NVDIMM devices:
     1) PMEM can be treated as NV-RAM: it is directly mapped into the CPU's
        address space so that the CPU can read/write it directly.
     2) Because an NVDIMM can have a huge capacity while the CPU's address
        space is limited, the NVDIMM only offers two windows that are mapped
        into the CPU's address space, a data window and an access window, so
        that the CPU can use these two windows to access the whole NVDIMM
        device.
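
     To make the NFIT contents concrete, here is a rough C sketch of the table
     layout as I read the ACPI 6.0 spec: the standard header is followed by a
     list of variable-length sub-structures, of which only the SPA Range
     structure is shown. This is illustrative only and is not code from this
     patch series.

         #include <stdint.h>

         /* Standard ACPI table header ("NFIT"); in the real table it is
          * followed by a 4-byte reserved field and then the sub-structures. */
         struct acpi_table_header {
             char     signature[4];        /* "NFIT" */
             uint32_t length;              /* whole table, header included */
             uint8_t  revision;
             uint8_t  checksum;
             char     oem_id[6];
             char     oem_table_id[8];
             uint32_t oem_revision;
             char     creator_id[4];
             uint32_t creator_revision;
         } __attribute__((packed));

         /* Sub-structure type 0: System Physical Address (SPA) Range.
          * One of these describes a range that firmware has placed in the
          * system address map, e.g. a PMEM region or a PBLK window region;
          * the GUID says which kind it is. */
         struct nfit_spa_range {
             uint16_t type;                /* 0 */
             uint16_t length;              /* size of this sub-structure */
             uint16_t range_index;
             uint16_t flags;
             uint32_t reserved;
             uint32_t proximity_domain;    /* NUMA locality */
             uint8_t  range_type_guid[16]; /* e.g. the persistent-memory GUID */
             uint64_t base;                /* system physical address */
             uint64_t size;                /* length of the range in bytes */
             uint64_t memory_mapping;      /* EFI memory-mapping attributes */
         } __attribute__((packed));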

You fail to mention PBLK. The question above really was about what

Item 2) above is PBLK.

entity controls which of the two modes get used (and perhaps for
which parts of the overall NVDIMM).

So I think the "normal" you mentioned is about PMEM. :)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

