
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen

On 04/22/16 06:36, Jan Beulich wrote:
> >>> On 22.04.16 at 14:26, <haozhong.zhang@xxxxxxxxx> wrote:
> > On 04/22/16 04:53, Jan Beulich wrote:
> >> Perhaps I have got confused by the back and forth. If we're to
> >> use struct page_info, then everything should be following a
> >> similar flow to what happens for normal RAM, i.e. normal page
> >> allocation, and normal assignment of pages to guests.
> >>
> > 
> > I'll follow the normal assignment of pages to guests for pmem, but not
> > the normal page allocation. Because allocation is difficult to always
> > get the same pmem area for the same guest every time. It still needs
> > input from others (e.g. toolstack) that can provide the exact address.
> Understood.
> > Because the address is now not decided by the Xen hypervisor, some
> > permission tracking is needed. For this part, we will re-use the
> > existing one for MMIO. Directly using the existing range struct for
> > pmem may consume too much space, so I proposed to choose different
> > data structures or to put a limitation on the existing range struct
> > to avoid or mitigate this problem.
> Why would these consume too much space? I'd expect there to be
> just one or very few chunks, just like is the case for MMIO ranges
> on devices.

As Ian Jackson indicated [1], there are several cases in which a pmem
page can be accessed by more than one domain. Every domain involved
then needs a range struct to track its access permission to that pmem
page. In the worst case, e.g. when the first of every two contiguous
pages on a pmem device is assigned to a domain and also shared with
all other domains, the range structs for a single domain may be of
acceptable size, but the total across all domains will still be very
large.


[1] http://lists.xenproject.org/archives/html/xen-devel/2016-03/msg02309.html
