
Re: [Xen-devel] [PATCH 0/4] add support for vNVDIMM



On 01/20/16 14:35, Stefano Stabellini wrote:
> On Wed, 20 Jan 2016, Zhang, Haozhong wrote:
> > On 01/20/16 12:43, Stefano Stabellini wrote:
> > > On Wed, 20 Jan 2016, Tian, Kevin wrote:
> > > > > From: Zhang, Haozhong
> > > > > Sent: Tuesday, December 29, 2015 7:32 PM
> > > > > 
> > > > > This patch series is the Xen part patch to provide virtual NVDIMM to
> > > > > guest. The corresponding QEMU patch series is sent separately with the
> > > > > title "[PATCH 0/2] add vNVDIMM support for Xen".
> > > > > 
> > > > > * Background
> > > > > 
> > > > >  NVDIMM (Non-Volatile Dual In-line Memory Module) is going to be
> > > > >  supported on Intel platforms. NVDIMM devices are discovered via
> > > > >  ACPI and configured through the _DSM methods of the NVDIMM
> > > > >  devices in ACPI. Relevant documents can be found at
> > > > >  [1] ACPI 6: 
> > > > > http://www.uefi.org/sites/default/files/resources/ACPI_6.0.pdf
> > > > >  [2] NVDIMM Namespace: 
> > > > > http://pmem.io/documents/NVDIMM_Namespace_Spec.pdf
> > > > >  [3] DSM Interface Example:
> > > > > http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf
> > > > >  [4] Driver Writer's Guide:
> > > > > http://pmem.io/documents/NVDIMM_Driver_Writers_Guide.pdf
> > > > > 
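(To make the NFIT part above concrete: below is a sketch of the System
Physical Address (SPA) Range structure from ACPI 6.0 [1], which tells
the OS where an NVDIMM range sits in the physical address space. Field
names are paraphrased; consult the spec for the authoritative layout.)

  #include <stdint.h>

  /* Sketch of the NFIT SPA Range structure (ACPI 6.0 [1]);
   * field names are paraphrased from the spec. */
  struct nfit_spa_range {
      uint16_t type;              /* 0 = SPA Range structure */
      uint16_t length;            /* size of this structure in bytes */
      uint16_t range_index;      /* referenced by other NFIT entries */
      uint16_t flags;
      uint32_t reserved;
      uint32_t proximity_domain;  /* NUMA node of the range */
      uint8_t  type_guid[16];     /* e.g. the Persistent Memory GUID */
      uint64_t base;              /* range start system physical address */
      uint64_t size;              /* range length in bytes */
      uint64_t memory_attributes; /* EFI memory mapping attributes */
  } __attribute__((packed));
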
> > > > >  The upstream QEMU (commits 5c42eef ~ 70d1fb9) has added support
> > > > >  for providing virtual NVDIMM in PMEM mode, in which NVDIMM
> > > > >  devices are mapped into the CPU's address space and are accessed
> > > > >  via normal memory reads/writes and three special instructions
> > > > >  (clflushopt/clwb/pcommit).
> > > > > 
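(To illustrate the access model: a minimal sketch of how software makes
stores to a PMEM-mapped range durable with those instructions. It
assumes a 64-byte cache line and a toolchain exposing the clwb/pcommit
intrinsics, e.g. gcc -mclwb -mpcommit; pmem_persist() is an
illustrative helper, not an existing API.)

  #include <immintrin.h>
  #include <stddef.h>
  #include <stdint.h>

  static void pmem_persist(const void *addr, size_t len)
  {
      uintptr_t p = (uintptr_t)addr & ~(uintptr_t)63;

      if (len == 0)
          return;
      for (; p < (uintptr_t)addr + len; p += 64)
          _mm_clwb((void *)p);  /* write dirty lines back, keep cached */
      _mm_sfence();             /* order the write-backs before PCOMMIT */
      _mm_pcommit();            /* commit stores queued in the memory
                                   controller to the NVDIMM */
      _mm_sfence();             /* wait for the commit to complete */
  }
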
> > > > >  This patch series and the corresponding QEMU patch series enable Xen
> > > > >  to provide vNVDIMM devices to HVM domains.
> > > > > 
> > > > > * Design
> > > > > 
> > > > >  Supporting vNVDIMM in PMEM mode has three requirements.
> > > > > 
> > > > 
> > > > Although this design is about vNVDIMM, some background on how
> > > > pNVDIMM is managed in Xen would be helpful for understanding the
> > > > whole design: in PMEM mode you need to map the pNVDIMM into the
> > > > guest's GFN address space, so there is the question of how the
> > > > pNVDIMM is allocated in the first place.
> > > 
> > > Yes, some background would be very helpful. Given that there are so many
> > > moving parts on this (Xen, the Dom0 kernel, QEMU, hvmloader, libxl)
> > > I suggest that we start with a design document for this feature.
> > 
> > Let me prepare a design document. Basically, it will include the
> > following contents. Please let me know if you want anything
> > additional to be included.
> 
> Thank you!
> 
> 
> > * What NVDIMM is and how it is used
> > * Software interface of NVDIMM
> >   - ACPI NFIT: what parameters are recorded and their usage
> >   - ACPI SSDT: what _DSM methods are provided and their functionality
> >   - New instructions: clflushopt/clwb/pcommit
> > * How the Linux kernel drives NVDIMM
> >   - ACPI parsing
> >   - Block device interface
> >   - Partitioning NVDIMM devices
> > * How KVM/QEMU implements vNVDIMM
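(For reference, upstream QEMU/KVM exposes a vNVDIMM along these lines;
option names are as of the commits cited above and may differ in later
QEMU versions, and /dev/pmem0 is just an example backing path:

  qemu-system-x86_64 -machine pc,nvdimm \
      -m 4G,slots=2,maxmem=8G \
      -object memory-backend-file,id=mem1,share=on,mem-path=/dev/pmem0,size=4G \
      -device nvdimm,id=nvdimm1,memdev=mem1

I will cover the internals behind this in the document.)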
> 
> This is a very good start.
> 
> 
> > * What I propose to implement vNVDIMM in Xen
> >   - Xen hypervisor/toolstack: new instruction enabling and address mapping
> >   - Dom0 Linux kernel: host NVDIMM driver
> >   - QEMU: virtual NFIT/SSDT, _DSM handling, and role in address mapping
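(On the address-mapping item above: a rough sketch of what the
toolstack side could look like if it reused the existing libxc call
that device passthrough uses today to map host frames into a guest.
Whether vNVDIMM should go through this path is exactly one of the open
design questions, and all frame numbers below are made up.)

  #include <xenctrl.h>

  /* Illustrative only: map nr 4K host frames of a pNVDIMM region,
   * starting at host frame mfn, into the guest at frame gfn. */
  static int map_pnvdimm(xc_interface *xch, uint32_t domid,
                         unsigned long gfn, unsigned long mfn,
                         unsigned long nr)
  {
      return xc_domain_memory_mapping(xch, domid, gfn, mfn, nr,
                                      DPCI_ADD_MAPPING);
  }
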
> 
> This is OK. It might also be good to list the other options that were
> discussed, but that is certainly not necessary in the first instance.

I'll include them.

And one thing I missed above:
* What I propose to implement vNVDIMM in Xen
  - Building vNFIT and vSSDT: copy them from QEMU to the Xen toolstack

I know this choice is controversial, so I will record the other options
and my reasons for making it.

Thanks,
Haozhong
