
Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains



On Thu, Mar 30, 2017 at 1:21 AM, Haozhong Zhang
<haozhong.zhang@xxxxxxxxx> wrote:
> On 03/29/17 21:20 -0700, Dan Williams wrote:
>> On Sun, Mar 19, 2017 at 5:09 PM, Haozhong Zhang
>> <haozhong.zhang@xxxxxxxxx> wrote:
>> > This is v2 RFC patch series to add vNVDIMM support to HVM domains.
>> > v1 can be found at 
>> > https://lists.xenproject.org/archives/html/xen-devel/2016-10/msg00424.html.
>> >
>> > No label support and no _DSM functions except function 0 ("query
>> > implemented functions") are supported by this version; they will be
>> > added by future patches.
>> >
>> > The corresponding Qemu patch series is sent in another thread
>> > "[RFC QEMU PATCH v2 00/10] Implement vNVDIMM for Xen HVM guest".
>> >
>> > All patch series can be found at
>> >   Xen:  https://github.com/hzzhan9/xen.git nvdimm-rfc-v2
>> >   Qemu: https://github.com/hzzhan9/qemu.git xen-nvdimm-rfc-v2
>> >
>> > Changes in v2
>> > ==============
>> >
>> > - One of the primary changes in v2 is dropping the Linux kernel
>> >   patches, which were used to reserve space on host pmem for placing
>> >   its frame table and M2P table. In v2, we add a management tool
>> >   xen-ndctl, which is used in Dom0 to notify the Xen hypervisor of
>> >   which storage can be used to manage the host pmem.
>> >
>> >   For example,
>> >   1.   xen-ndctl setup 0x240000 0x380000 0x380000 0x3c0000
>> >     tells Xen hypervisor to use host pmem pages at MFN 0x380000 ~
>> >     0x3c0000 to manage host pmem pages at MFN 0x240000 ~ 0x380000.
>> >     I.e. the former is used to place the frame table and M2P table of
>> >     both ranges of pmem pages.
>> >
>> >   2.   xen-ndctl setup 0x240000 0x380000
>> >     tells Xen hypervisor to use the regular RAM to manage the host
>> >     pmem pages at MFN 0x240000 ~ 0x380000. I.e. the regular RAM is
>> >     used to place the frame table and M2P table. (Sizes for these
>> >     examples are worked out below.)
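>> >
>> >   (For scale: in example 1, the managed range MFN 0x240000 ~ 0x380000
>> >   is 0x140000 frames, i.e. 5 GiB of pmem with 4 KiB pages, while the
>> >   reserved range MFN 0x380000 ~ 0x3c0000 is 0x40000 frames, i.e. 1 GiB
>> >   set aside for the frame table and M2P entries of both ranges.)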
>> >
>> > - Another primary change in v2 is dropping the support for mapping
>> >   files on the host pmem to HVM domains as virtual NVDIMMs, as I
>> >   cannot find a stable way to fix the fiemap of host files. Instead,
>> >   we can rely on the ability added in Linux kernel v4.9 to create
>> >   multiple pmem namespaces on a single NVDIMM interleave set.
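>> >
>> >   For example, with the ndctl utility (the region name and sizes
>> >   below are only illustrative, not taken from this series), two
>> >   namespaces can be created on the same region/interleave set and
>> >   then assigned to different guests:
>> >
>> >     ndctl create-namespace --region=region0 --size=4G
>> >     ndctl create-namespace --region=region0 --size=8G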
>>
>> This restriction is unfortunate, and it seems to limit the future
>> architecture of the pmem driver. We may not always be able to
>> guarantee a contiguous physical address range to Xen for a given
>> namespace and may want to concatenate disjoint physical address ranges
>> into a logically contiguous namespace.
>>
>
> The hypervisor code that actually maps host pmem addresses to a guest
> does not require the host addresses to be contiguous. We can modify the
> toolstack code that gets the address range from a namespace to support
> passing multiple address ranges to the Xen hypervisor.
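>
> For illustration, a minimal sketch of what such an extended interface
> could look like (the structure and names below are hypothetical, not
> the actual Xen toolstack/hypervisor ABI):
>
>   /* Hypothetical sketch only; not the real interface of this series. */
>   struct pmem_range {
>       unsigned long smfn;   /* first machine frame of the extent */
>       unsigned long emfn;   /* one past the last machine frame   */
>   };
>
>   /* The toolstack would enumerate every physical extent backing a
>    * namespace and pass the whole array to the hypervisor, so a
>    * logically contiguous namespace may consist of disjoint extents. */
>   static const struct pmem_range ranges[] = {
>       { 0x240000, 0x380000 },
>       { 0x500000, 0x540000 },   /* second, disjoint extent (illustrative) */
>   };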
>
>> Is there a resource I can read more about why the hypervisor needs to
>> have this M2P mapping for nvdimm support?
>
> M2P is basically an array of frame numbers. It's indexed by the host
> page frame number, i.e. the machine frame number (MFN) in Xen's
> terminology. The n'th entry records the guest page frame number that
> is mapped to MFN n. M2P is one of the core data structures used in Xen
> memory management and is used to convert an MFN to a guest PFN. A
> read-only version of M2P is also exposed as part of the ABI to guests.
> In the previous design discussion, we decided to integrate the
> management of NVDIMM into the existing Xen memory management as much
> as possible, so we need to build the M2P table for NVDIMM as well.
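>
> For a rough mental model, a minimal conceptual sketch of an M2P table
> (names and the size below are illustrative, not the actual Xen code):
>
>   /* Conceptual sketch only; the real Xen M2P is built differently. */
>   #define MAX_MFN (1UL << 24)               /* illustrative upper bound */
>
>   /* One entry per machine (host) frame, indexed by MFN. */
>   static unsigned long m2p[MAX_MFN];
>
>   /* Record that guest frame gfn is currently backed by machine frame mfn. */
>   static void set_m2p_entry(unsigned long mfn, unsigned long gfn)
>   {
>       m2p[mfn] = gfn;
>   }
>
>   /* Translate a machine frame back to the guest frame mapped to it. */
>   static unsigned long mfn_to_gfn(unsigned long mfn)
>   {
>       return m2p[mfn];
>   }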
>

Thanks, but what I still don't understand is why this M2P lookup is
needed. Does Xen establish this metadata for PCI MMIO ranges as well?
What Xen memory management operations does this enable? Sorry if these
are basic Xen questions; I'm just looking to see if we can make the
mapping support more dynamic. For example, what if we wanted to change
the MFN-to-guest-PFN relationship after every fault?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

