
Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains



On Sun, Mar 19, 2017 at 5:09 PM, Haozhong Zhang
<haozhong.zhang@xxxxxxxxx> wrote:
> This is v2 RFC patch series to add vNVDIMM support to HVM domains.
> v1 can be found at 
> https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg00424.html.
>
> No labels and no _DSM functions except function 0 ("query implemented
> functions") are supported by this version, but they will be added by
> future patches.
>
> The corresponding Qemu patch series is sent in another thread
> "[RFC QEMU PATCH v2 00/10] Implement vNVDIMM for Xen HVM guest".
>
> All patch series can be found at
>   Xen:  https://github.com/hzzhan9/xen.git nvdimm-rfc-v2
>   Qemu: https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v2
>
> Changes in v2
> ==============
>
> - One of the primary changes in v2 is dropping the Linux kernel
>   patches, which were used to reserve an area on the host pmem for
>   placing its frame table and M2P table. In v2, we instead add a
>   management tool, xen-ndctl, which is used in Dom0 to notify the Xen
>   hypervisor which storage can be used to manage the host pmem.
>
>   For example,
>   1.   xen-ndctl setup 0x240000 0x380000 0x380000 0x3c0000
>     tells the Xen hypervisor to use host pmem pages at MFN 0x380000 ~
>     0x3c0000 to manage host pmem pages at MFN 0x240000 ~ 0x380000.
>     I.e., the former range is used to place the frame table and M2P
>     table of both ranges of pmem pages.
>
>   2.   xen-ndctl setup 0x240000 0x380000
>     tells the Xen hypervisor to use regular RAM to manage the host
>     pmem pages at MFN 0x240000 ~ 0x380000. I.e., regular RAM is used
>     to place the frame table and M2P table.
>
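
For what it's worth, here is a quick back-of-the-envelope check of the
sizing in example 1, assuming the usual x86-64 Xen values of a 32-byte
struct page_info and an 8-byte M2P entry (both constants are my
assumptions, not values taken from this series):

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE       4096ULL
#define PAGE_INFO_SIZE    32ULL   /* assumed sizeof(struct page_info) */
#define M2P_ENTRY_SIZE     8ULL   /* assumed bytes per M2P entry */

int main(void)
{
    uint64_t data_start = 0x240000, data_end = 0x380000;  /* pmem to expose */
    uint64_t mgmt_start = 0x380000, mgmt_end = 0x3c0000;  /* reserved pmem  */

    /* The reserved range holds the frame table and M2P entries
     * covering both ranges of pmem pages. */
    uint64_t total_pages = (data_end - data_start) + (mgmt_end - mgmt_start);
    uint64_t metadata    = total_pages * (PAGE_INFO_SIZE + M2P_ENTRY_SIZE);
    uint64_t reserved    = (mgmt_end - mgmt_start) * PAGE_SIZE;

    printf("pmem pages covered: %llu\n", (unsigned long long)total_pages);
    printf("metadata needed   : %llu MiB\n",
           (unsigned long long)(metadata >> 20));
    printf("reserved for mgmt : %llu MiB\n",
           (unsigned long long)(reserved >> 20));
    return 0;
}

That works out to roughly 60 MiB of metadata against a 1 GiB reserved
range, so the reservation in example 1 looks comfortably sized.
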
> - Another primary change in v2 is dropping the support to map files
>   on the host pmem to HVM domains as virtual NVDIMMs, as I cannot find
>   a stable way to fix the fiemap of host files. Instead, we can rely
>   on the ability added in Linux kernel v4.9 to create multiple pmem
>   namespaces on a single nvdimm interleave set.

This restriction is unfortunate, and it seems to limit the future
architecture of the pmem driver. We may not always be able to
guarantee a contiguous physical address range to Xen for a given
namespace and may want to concatenate disjoint physical address ranges
into a logically contiguous namespace.

Is there a resource where I can read more about why the hypervisor
needs this M2P mapping for nvdimm support?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

