
Re: [Xen-devel] [RFC XEN PATCH v2 00/15] Add vNVDIMM support to HVM domains



On Sat, Apr 1, 2017 at 4:54 AM, Konrad Rzeszutek Wilk <konrad@xxxxxxxxxx> wrote:
> ..snip..
>> >> Is there a resource I can read more about why the hypervisor needs to
>> >> have this M2P mapping for nvdimm support?
>> >
>> > M2P is basically an array of frame numbers. It's indexed by the host
>> > page frame number, i.e. the machine frame number (MFN) in Xen's
>> > terminology. The n'th entry records the guest page frame number that is
>> > mapped to MFN n. M2P is one of the core data structures used in Xen
>> > memory management, and is used to convert an MFN to a guest PFN. A
>> > read-only version of M2P is also exposed as part of the ABI to guests.
>> > In the previous design discussion, we decided to put the management of
>> > NVDIMM into the existing Xen memory management as much as possible, so
>> > we need to build M2P for NVDIMM as well.
>> >
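For illustration, here is a minimal C sketch of the M2P idea described
above; the array size and helper names are simplified stand-ins, not
Xen's actual implementation:

    /* M2P sketch: one entry per machine frame number (MFN); each entry
     * holds the guest page frame number (GFN) mapped to that MFN. */
    #define MAX_MFNS (1UL << 20)   /* hypothetical machine frame count */

    static unsigned long m2p[MAX_MFNS];

    /* Record that guest frame 'gfn' is backed by machine frame 'mfn'. */
    static void set_gpfn_from_mfn(unsigned long mfn, unsigned long gfn)
    {
        m2p[mfn] = gfn;
    }

    /* Reverse lookup: which guest frame does machine frame 'mfn' back? */
    static unsigned long get_gpfn_from_mfn(unsigned long mfn)
    {
        return m2p[mfn];
    }

Building M2P for NVDIMM in this scheme means keeping one such entry per
NVDIMM page frame.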
>>
>> Thanks, but what I don't understand is why this M2P lookup is needed.
>
> Xen uses it to construct the EPT page tables for the guests.
>
>> Does Xen establish this metadata for PCI mmio ranges as well? What Xen
>
> It doesn't have that (M2P) for PCI MMIO ranges. For those it has a
> ranges construct (since those are usually contiguous and given
> in ranges to a guest).
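
To make the contrast concrete, here is a minimal C sketch of a
range-based description like the one mentioned above for MMIO; the
struct and function names are illustrative, not Xen's actual code:

    #include <stdbool.h>

    /* One record describes a whole contiguous region handed to a guest. */
    struct mmio_range {
        unsigned long start_mfn;  /* first machine frame of the region   */
        unsigned long nr_frames;  /* number of contiguous frames         */
        unsigned long start_gfn;  /* guest frame the region is mapped at */
    };

    /* Translate an MFN inside the range to the guest frame mapping it.
     * Returns false if the MFN is outside this range. */
    static bool range_mfn_to_gfn(const struct mmio_range *r,
                                 unsigned long mfn, unsigned long *gfn)
    {
        if (mfn < r->start_mfn || mfn >= r->start_mfn + r->nr_frames)
            return false;
        *gfn = r->start_gfn + (mfn - r->start_mfn);
        return true;
    }

A single record of this kind covers an arbitrarily large contiguous
region, whereas M2P needs one entry per page frame; that difference is
what the question below is getting at.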

So, I'm confused again. This patchset / enabling requires both M2P and
contiguous PMEM ranges. If the PMEM is contiguous, it seems you don't
need M2P and can just reuse the MMIO enabling, or am I missing
something?

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

