
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu

On 01/20/2016 07:20 PM, Jan Beulich wrote:
On 20.01.16 at 12:04, <haozhong.zhang@xxxxxxxxx> wrote:
On 01/20/16 01:46, Jan Beulich wrote:
On 20.01.16 at 06:31, <haozhong.zhang@xxxxxxxxx> wrote:
Secondly, the driver implements a convenient block device interface to
let software access areas where NVDIMM devices are mapped. The
existing vNVDIMM implementation in QEMU uses this interface.

As the Linux NVDIMM driver has already done all of the above, why should we
bother to reimplement it in Xen?
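For concreteness, once the driver has set up a namespace, software can reach
it with nothing more than open/mmap on the block device. A minimal sketch,
assuming the namespace shows up as /dev/pmem0 (the usual name, but an
assumption) and the driver supports DAX:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/pmem0", O_RDWR);
    if (fd < 0) {
        perror("open /dev/pmem0");
        return 1;
    }

    /* With a DAX-capable driver this maps NVDIMM memory directly,
     * without going through the page cache. */
    size_t len = 4096;
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    memcpy(p, "hello", 6);   /* plain load/store access */
    msync(p, len, MS_SYNC);  /* ask for durability */

    munmap(p, len);
    close(fd);
    return 0;
}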

See above; a possibility is that we may need a split model (block
layer parts in Dom0, "normal memory" parts in the hypervisor).
Iirc the split is determined by firmware, and hence set in
stone by the time the OS (or hypervisor) boots.

For the "normal memory" parts, do you mean parts that map the host
NVDIMM device's address space range to the guest? I'm going to
implement that part in hypervisor and expose it as a hypercall so that
it can be used by QEMU.
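For illustration only: one conceivable shape of that path is the existing
XEN_DOMCTL_memory_mapping route that QEMU already uses (via libxc) for MMIO
passthrough; whether NVDIMM warrants its own dedicated hypercall is exactly
what is being discussed here. All numbers below are made up:

#include <xenctrl.h>

/* Map a host NVDIMM machine-frame range into a guest's physical
 * address space, reusing xc_domain_memory_mapping(). A dedicated
 * NVDIMM hypercall, if added, might take a similar shape. */
int map_vnvdimm(xc_interface *xch, uint32_t domid,
                unsigned long gfn,  /* guest frame to map at     */
                unsigned long mfn,  /* host NVDIMM machine frame */
                unsigned long nr)   /* number of 4K frames       */
{
    /* DPCI_ADD_MAPPING (1) asks the hypervisor to add the mapping */
    return xc_domain_memory_mapping(xch, domid, gfn, mfn, nr,
                                    DPCI_ADD_MAPPING);
}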

To answer this I need to have my understanding of the partitioning
done by firmware confirmed: if that is the case, then "normal"
means the part that does not get exposed as a block device (SSD).
In any event there's no correlation to guest exposure here.

Firmware does not manage NVDIMM; all NVDIMM operations are handled
by the OS.

Actually, there are lots of things we should take into account if we move
NVDIMM management into the hypervisor:
a) ACPI NFIT interpretation
   NFIT is a new ACPI table introduced in ACPI 6.0. It exports the basic
   information of NVDIMM devices, including PMEM info, PBLK info, NVDIMM
   device interleaving, vendor info, etc. Let me explain these one by one
   (a sketch of walking this table follows after this item).

   PMEM and PBLK are two modes of accessing NVDIMM devices:
   1) PMEM can be treated as NV-RAM that is directly mapped into the CPU's
      address space, so the CPU can read/write it directly.
   2) as an NVDIMM can have a huge capacity while the CPU's address space is
      limited, PBLK exposes only two windows mapped into the CPU's address
      space, a data window and an access window, through which the CPU can
      access the whole NVDIMM device.

   NVDIMM devices can be interleaved; the interleaving info is also exported
   so that we can calculate the address needed to access a specific NVDIMM
   device.

   NVDIMM devices from different vendors can have different functions, so
   NFIT also exports vendor info to let the vendor's driver work.
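To give a feel for what "NFIT interpretation" means in code, here is a rough
sketch of walking the table's sub-structures. The layout follows ACPI 6.0
(a 36-byte standard header, 4 reserved bytes, then sub-structures that each
begin with a 16-bit type and 16-bit length); it is simplified and not taken
from any existing implementation:

#include <stdint.h>
#include <stdio.h>

struct acpi_header {        /* standard 36-byte ACPI table header */
    char     signature[4];
    uint32_t length;
    uint8_t  revision, checksum;
    char     oem_id[6], oem_table_id[8];
    uint32_t oem_revision;
    char     creator_id[4];
    uint32_t creator_revision;
} __attribute__((packed));

struct nfit_sub_hdr {       /* common header of every NFIT sub-structure */
    uint16_t type;
    uint16_t length;
} __attribute__((packed));

static void walk_nfit(const uint8_t *nfit)
{
    const struct acpi_header *h = (const void *)nfit;
    uint32_t off = sizeof(*h) + 4;           /* skip header + reserved */

    while (off + sizeof(struct nfit_sub_hdr) <= h->length) {
        const struct nfit_sub_hdr *s = (const void *)(nfit + off);
        if (s->length == 0)
            break;                           /* malformed table */
        switch (s->type) {
        case 0: printf("SPA range (PMEM/PBLK region)\n"); break;
        case 1: printf("NVDIMM region mapping\n");        break;
        case 2: printf("interleave structure\n");         break;
        default: printf("type %u structure\n", s->type);  break;
        }
        off += s->length;
    }
}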

b) ACPI SSDT interpretation
   The SSDT offers the _DSM method, which controls the NVDIMM device:
   label operations, health checks, etc., and hotplug support (see the
   sketch after this item).
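As a sketch of why this is easy in Dom0 but hard in Xen: the Dom0 kernel
has an AML interpreter, so evaluating a _DSM is one helper call, while Xen
has no interpreter at all. The function index and UUID below are
placeholders, and acpi_evaluate_dsm() is shown with its current-kernel
(guid_t) signature:

#include <linux/acpi.h>

/* Evaluate a hypothetical "get health" _DSM function (index 4) on an
 * NVDIMM handle. family_uuid selects the vendor's DSM family. */
static int nvdimm_dsm_get_health(acpi_handle handle,
                                 const guid_t *family_uuid)
{
    union acpi_object *out;

    /* rev 1, function 4 (made-up index), no package argument */
    out = acpi_evaluate_dsm(handle, family_uuid, 1, 4, NULL);
    if (!out)
        return -EIO;

    /* ... decode the returned buffer per the vendor's DSM spec ... */
    ACPI_FREE(out);
    return 0;
}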

c) Resource management
   NVDIMM resource management is challenging because:
   1) PMEM is huge and slightly slower to access than RAM, so it is not
      suitable to manage it via page structs (I think this is not a big
      problem in Xen);
   2) we need to partition it so that it can be used by multiple VMs (a
      sketch follows this list);
   3) we will need to support PBLK and partition it in the future.
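The sketch below illustrates point 2): carving a host PMEM region into
per-VM chunks by address range rather than per-page structs. It is purely
hypothetical; none of these types exist in Xen:

#include <stdint.h>

struct pmem_region {
    uint64_t base_mfn;   /* first machine frame of the PMEM range */
    uint64_t nr_mfns;    /* size in 4K frames                     */
    uint64_t next_free;  /* bump-allocator cursor                 */
};

/* Hand out a contiguous chunk of PMEM frames to one VM; returns the
 * first mfn, or 0 on exhaustion. A real design would record per-domain
 * ownership so the range can be reclaimed at domain teardown. */
static uint64_t pmem_alloc(struct pmem_region *r, uint64_t nr,
                           uint32_t domid)
{
    (void)domid;  /* a real implementation would track the owner */
    if (r->next_free + nr > r->nr_mfns)
        return 0;
    uint64_t mfn = r->base_mfn + r->next_free;
    r->next_free += nr;
    return mfn;
}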

d) Management tools support
   S.M.A.R.T.? Error detection and recovery?

e) Hotplug support

f) Third-party drivers
   Vendor drivers would need to be ported to the Xen hypervisor and
   supported by the management tool.

g) ...



