
Re: [Xen-devel] [PATCH 1/2] tmem: add full support for x86 up to 16Tb



On Mon, 2013-09-23 at 10:36 +0100, Andrew Cooper wrote:
> On 23/09/13 03:23, Bob Liu wrote:
> > tmem used to have code that assumed it could directly access all memory.
> > This patch fixes that assumption fully, so that tmem can work on x86 with
> > up to 16TB of memory.
> >
> > tmem allocates pages mainly for two purposes:
> > 1. Storing pages passed in from guests through the frontswap/cleancache
> > frontends. Here tmem already uses map_domain_page() before accessing the
> > memory, so no change is needed for 16TB support.
> >
> > 2. Storing tmem metadata.
> > Here map_domain_page() is a problem: the number of mapping entries is
> > limited (a 2-vCPU guest has only 32 entries), and tmem cannot unmap
> > them again quickly because the metadata mappings are long-lived.
> > The fix is to allocate xen heap pages instead of domain heap pages for
> > tmem metadata.
> 
> This is a no-go.
> 
> Xenheap pages are just as limited as domain mapping slots (and perhaps
> more so).
> 
> All Xenheap pages live inside the Xen-mapped region, including in the
> upper virtual region of 32-bit PV guests.

Not on a 64-bit hypervisor, which is all we support these days...

(32-bit ARM doesn't have the same limitations as 32-bit x86 did)

Ian.
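
For readers following the allocator discussion above, here is a minimal
illustrative sketch of the two paths being compared: transient
map_domain_page() mappings of domain-heap pages versus permanently mapped
xen-heap pages. This is not code from the patch under review; the helper
names are made up for illustration, and the exact map_domain_page() /
page_to_mfn() signatures have varied between Xen releases.

    #include <xen/mm.h>
    #include <xen/domain_page.h>
    #include <xen/string.h>

    /*
     * Path 1 (data pages from guests): a domain-heap page is mapped
     * transiently around each access.  Each mapping occupies one of the
     * limited per-domain map_domain_page() slots (scaled by vCPU count,
     * per the discussion above) until unmap_domain_page() releases it.
     * Hypothetical helper, for illustration only.
     */
    static void copy_into_domheap_page(struct page_info *pg,
                                       const void *src, unsigned int len)
    {
        void *va = map_domain_page(page_to_mfn(pg)); /* transient mapping */

        memcpy(va, src, len);
        unmap_domain_page(va);                       /* release the slot */
    }

    /*
     * Path 2 (long-lived metadata): a xen-heap page is permanently mapped
     * in Xen's own virtual address range, so the caller can simply keep
     * the returned pointer without any map/unmap around each access.
     * Hypothetical helper, for illustration only.
     */
    static void *alloc_metadata_page(void)
    {
        return alloc_xenheap_pages(0, 0);            /* order 0, no memflags */
    }

The trade-off is the one raised above: xenheap allocations consume Xen's
own mapped region, which is tight on a 32-bit hypervisor but, as noted, is
not the same constraint on the 64-bit-only hypervisors supported today.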


