
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu



On Wed, Jan 20, 2016 at 07:04:49PM +0800, Haozhong Zhang wrote:
> On 01/20/16 01:46, Jan Beulich wrote:
> > >>> On 20.01.16 at 06:31, <haozhong.zhang@xxxxxxxxx> wrote:
> > > The primary reason for the current solution is to reuse the existing
> > > NVDIMM driver in the Linux kernel.
> >
> 
> CC'ing QEMU vNVDIMM maintainer: Xiao Guangrong
> 
> > Re-using code in the Dom0 kernel has benefits and drawbacks, and
> > in any event needs to depend on proper layering to remain in place.
> > A benefit is less code duplication between Xen and Linux; along the
> > same lines a drawback is code duplication between various Dom0
> > OS variants.
> >
> 
> I'm not sure about other Dom0 OSes, but Linux has had an NVDIMM driver
> since 4.2.
> 
> > > One responsibility of this driver is to discover NVDIMM devices and
> > > their parameters (e.g. which portion of an NVDIMM device can be mapped
> > > into the system address space and which address it is mapped to) by
> > > parsing ACPI NFIT tables. Looking at the NFIT spec in Sec 5.2.25 of
> > > ACPI Specification v6 and the actual code in the Linux kernel
> > > (drivers/acpi/nfit.*), it's not a trivial task.
> > 
> > To answer one of Kevin's questions: the NFIT table doesn't appear
> > to require the ACPI interpreter. It seems more like SRAT and SLIT.
> 
> Sorry, I made a mistake in another reply. NFIT does not contain
> anything requiring an ACPI interpreter. But there are some _DSM methods
> for NVDIMM in the SSDT, which do need an ACPI interpreter.

Right, but those are for health checks and such; they are not needed for
boot-time discovery of the NVDIMM's memory ranges.
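
For reference, here is roughly what such a boot-time walk looks like - a
sketch against the ACPI 6.0 layout (Sec 5.2.25), not code from this series.
The struct names below are made up for illustration; the real Linux ones
live in drivers/acpi/nfit.* and include/acpi/actbl1.h:

#include <stdint.h>

struct acpi_table_header {          /* standard 36-byte ACPI header */
    char     signature[4];          /* "NFIT" */
    uint32_t length;                /* length of the whole table */
    uint8_t  rest[28];              /* revision, checksum, OEM ids, ... */
};

struct nfit_subtable {              /* common subtable header */
    uint16_t type;                  /* 0 = SPA Range Structure */
    uint16_t length;
};

struct nfit_spa_range {             /* Type 0: System Physical Address Range */
    struct nfit_subtable hdr;
    uint16_t range_index;
    uint16_t flags;
    uint32_t reserved;
    uint32_t proximity_domain;
    uint8_t  type_guid[16];         /* e.g. the Persistent Memory GUID */
    uint64_t base;                  /* start in the system address space */
    uint64_t size;                  /* length of the range */
    uint64_t memory_mapping;        /* EFI attributes, e.g. WB */
};

/* Walk all subtables; no AML involved - this is plain structure
 * parsing, just like SRAT/SLIT. */
static void walk_nfit(struct acpi_table_header *nfit)
{
    uint8_t *p   = (uint8_t *)nfit + sizeof(*nfit) + 4; /* skip Reserved */
    uint8_t *end = (uint8_t *)nfit + nfit->length;

    while (p + sizeof(struct nfit_subtable) <= end) {
        struct nfit_subtable *sub = (struct nfit_subtable *)p;

        if (sub->length == 0)
            break;                  /* malformed table; don't loop forever */
        if (sub->type == 0) {
            struct nfit_spa_range *spa = (struct nfit_spa_range *)sub;
            /* spa->base / spa->size is what the kernel (or Xen) would
             * record as an NVDIMM region. */
            (void)spa;
        }
        p += sub->length;
    }
}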
> 
> > Also you failed to answer Kevin's question regarding E820 entries: I
> > think NVDIMMs (or at least parts thereof) get represented in E820 (or
> > the EFI memory map), and if that's the case this would be a very
> > strong hint towards management needing to be in the hypervisor.
> >
> 
> Legacy NVDIMM devices may use E820 entries or other ad-hoc ways to
> announce their locations, but newer ones that follow the ACPI v6 spec do
> not need E820 any more and only need ACPI NFIT (i.e. firmware may not
> build E820 entries for them).

I am missing something here.

Linux pvops uses a hypercall (XENMEM_machine_memory_map) to construct its
E820; see arch/x86/xen/setup.c:xen_memory_setup.

That hypercall gets a filtered E820 from the hypervisor, and the
hypervisor gets the E820 from multiboot2 - which gets it from grub2.
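
Condensed, that path is (a sketch; the real code is in
arch/x86/xen/setup.c and details vary by kernel version):

/* Condensed from arch/x86/xen/setup.c:xen_memory_setup() (4.x-era pvops;
 * exact names vary by kernel version, and the -ENOSYS fallback for old
 * hypervisors is elided). */
static void __init fetch_machine_e820(void)
{
    static struct e820entry map[E820MAX] __initdata;
    struct xen_memory_map memmap;
    int op, rc;

    memmap.nr_entries = E820MAX;
    set_xen_guest_handle(memmap.buffer, map);

    /* dom0 asks for the machine map; a domU gets the pseudo-physical one. */
    op = xen_initial_domain() ? XENMEM_machine_memory_map
                              : XENMEM_memory_map;
    rc = HYPERVISOR_memory_op(op, &memmap);
    BUG_ON(rc);

    /* Anything the hypervisor filtered out of 'map' - e.g. an NVDIMM
     * type it does not recognize - never reaches the kernel. */
}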

Since 'legacy NVDIMM' devices use E820_NVDIMM (type 12? 13), they don't
show up in multiboot2 - which means Xen will ignore them (I am not sure
whether it changes them to E820_RESERVED or just leaves them alone).

Anyhow, for the /dev/pmem0 driver in Linux to construct a block device
on the E820_NVDIMM range it MUST have the E820 entry - but we don't
construct that.

I would think that one of the patches would be for the hypervisor
to recognize the E820_NVDIMM and associate that area with p2m_mmio
(so that the xc_memory_mapping hypercall would work on the MFNs)?
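
If so, the toolstack side would look something like the existing PCI
passthrough path. A sketch, assuming the hypervisor has already made the
range p2m_mmio-mappable - the function and variable names are mine, only
xc_domain_iomem_permission/xc_domain_memory_mapping are real libxc calls:

#include <xenctrl.h>

/* Map nr_mfns machine frames of the NVDIMM, starting at nvdimm_mfn,
 * into the guest at gfn via XEN_DOMCTL_memory_mapping. */
static int map_nvdimm_into_guest(xc_interface *xch, uint32_t domid,
                                 unsigned long gfn, unsigned long nvdimm_mfn,
                                 unsigned long nr_mfns)
{
    /* Grant the domain access to the machine frames first. */
    int rc = xc_domain_iomem_permission(xch, domid, nvdimm_mfn, nr_mfns, 1);
    if (rc)
        return rc;

    return xc_domain_memory_mapping(xch, domid, gfn, nvdimm_mfn,
                                    nr_mfns, DPCI_ADD_MAPPING);
}

The caching question (WB vs. UC) raised below is exactly where this plain
MMIO path would need adjusting.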

But you also mention ACPI v6 defining them as using ACPI NFIT - so that
would mean treating the system addresses extracted from the ACPI NFIT
just as MMIO (except being WB instead of UC).

Either way, the Xen hypervisor should also parse the ACPI NFIT so that
it can mark that range as p2m_mmio (or does it do that by default for
any non-E820 ranges?). Does it actually need to do that, or is that
optional?

I hope the design document will explain a bit of this.

> 
> The current Linux kernel can handle both legacy and new NVDIMM devices
> and provides the same block device interface for both.

OK, so Xen would need to do that as well, so that the Linux kernel
can utilize it.
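
For reference, the two E820 flavours as Linux names them, plus a purely
hypothetical check of the kind Xen's E820 filtering would need
(illustration only, not existing Xen code):

#include <stdbool.h>
#include <stdint.h>

/* From arch/x86/include/uapi/asm/e820.h (v4.2-era Linux): */
#define E820_PMEM  7    /* ACPI 6.0 persistent memory, also in NFIT */
#define E820_PRAM 12    /* legacy, pre-standard NVDIMM ("type 12") */

/* Hypothetical: the pass-through test Xen would need when building
 * dom0's map, so the pmem driver can see both kinds of entry. */
static inline bool e820_is_nvdimm(uint32_t type)
{
    return type == E820_PMEM || type == E820_PRAM;
}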
> 
> > > Secondly, the driver implements a convenient block device interface to
> > > let software access areas where NVDIMM devices are mapped. The
> > > existing vNVDIMM implementation in QEMU uses this interface.
> > > 
> > > As the Linux NVDIMM driver has already done all of the above, why
> > > should we bother to reimplement it in Xen?
> > 
> > See above; a possibility is that we may need a split model (block
> > layer parts in Dom0, "normal memory" parts in the hypervisor).
> > IIRC the split is determined by firmware, and hence set in
> > stone by the time the OS (or hypervisor) boot starts.
> >
> 
> For the "normal memory" parts, do you mean parts that map the host
> NVDIMM device's address space range to the guest? I'm going to
> implement that part in the hypervisor and expose it as a hypercall so that
> it can be used by QEMU.
> 
> Haozhong
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

