
Re: [Xen-devel] [PATCH] xen/tmem: Don't use map_domain_page for long-life-time pages.



On Thu, 2013-08-22 at 18:15 +0800, Josh Zhao wrote:
> 2013/6/13 Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>:
> > On Thu, Jun 13, 2013 at 02:24:11PM +0100, George Dunlap wrote:
> >> On 13/06/13 13:50, Konrad Rzeszutek Wilk wrote:
> >> >When using tmem with Xen 4.3 (and debug build) we end up with:
> >> >
> >> >(XEN) Xen BUG at domain_page.c:143
> >> >(XEN) ----[ Xen-4.3-unstable  x86_64  debug=y  Not tainted ]----
> >> >(XEN) CPU:    3
> >> >(XEN) RIP:    e008:[<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
> >> >..
> >> >(XEN) Xen call trace:
> >> >(XEN)    [<ffff82c4c01606a7>] map_domain_page+0x61d/0x6e1
> >> >(XEN)    [<ffff82c4c01373de>] cli_get_page+0x15e/0x17b
> >> >(XEN)    [<ffff82c4c01377c4>] tmh_copy_from_client+0x150/0x284
> >> >(XEN)    [<ffff82c4c0135929>] do_tmem_put+0x323/0x5c4
> >> >(XEN)    [<ffff82c4c0136510>] do_tmem_op+0x5a0/0xbd0
> >> >(XEN)    [<ffff82c4c022391b>] syscall_enter+0xeb/0x145
> >> >(XEN)
> >> >
> >> >A bit of debugging revealed that map_domain_page and unmap_domain_page
> >> >are meant for short-lived mappings, and that the number of such mappings
> >> >is finite. In a 2-VCPU guest we only have 32 entries, and once we have
> >> >exhausted those we trigger the BUG_ON condition.
> >> >
> >> >The two functions - tmh_persistent_pool_page_[get,put] - are used by the
> >> >xmem_pool when xmem_pool_[alloc,free] are called. These xmem_pool_*
> >> >functions are wrapped in macros and functions - the entry points are
> >> >tmem_malloc and tmem_page_alloc. In both cases the users are in the
> >> >hypervisor and they do not seem to suffer from using hypervisor virtual
> >> >addresses.
> >> >
> >> >CC: Bob Liu <bob.liu@xxxxxxxxxx>
> >> >CC: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
> >> >Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> >Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> >> >---
> >> >  xen/common/tmem_xen.c |    5 ++---
> >> >  1 files changed, 2 insertions(+), 3 deletions(-)
> >> >
> >> >diff --git a/xen/common/tmem_xen.c b/xen/common/tmem_xen.c
> >> >index 3a1f3c9..736a8c3 100644
> >> >--- a/xen/common/tmem_xen.c
> >> >+++ b/xen/common/tmem_xen.c
> >> >@@ -385,7 +385,7 @@ static void *tmh_persistent_pool_page_get(unsigned long size)
> >> >      if ( (pi = _tmh_alloc_page_thispool(d)) == NULL )
> >> >          return NULL;
> >> >      ASSERT(IS_VALID_PAGE(pi));
> >> >-    return __map_domain_page(pi);
> >> >+    return page_to_virt(pi);
> >>
> >> Did I understand correctly that the map_domain_page() was required
> >> on >5TiB systems, presumably because of limited virtual address
> >> space?  In which case this code will fail on those systems?
> >
> > Correct.
> 
> I don't understand why map_domain_page() was required on >5TiB systems?

Xen on x86_64 only keeps a direct 1:1 mapping of up to 5TiB of memory.
Anything higher than that is demand mapped.

This is similar to the distinction between lowmem and highmem on x86.

Unless you require that the memory is always mapped (i.e. it is in the
xenheap, which may be considerably smaller than 5TiB on non-x86_64
platforms), it is correct to use the domheap allocator and
map_domain_page() etc. If your domheap page happens to fall within the
5TiB direct-mapping range, then the "demand map" is a no-op.

Ian.
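
For illustration, a minimal sketch of the two usage patterns described
above: map_domain_page()/unmap_domain_page() for short-lived access to a
domheap page, and page_to_virt() for a page that must stay mapped for its
whole lifetime. The Xen interfaces named are real; the two wrapper
functions are hypothetical and only show the pattern, not code from the
patch.

/* Short-lived access to a domheap page: works even for memory above the
 * 5TiB direct map, because the mapping is created on demand and torn
 * down again immediately.  Hypothetical helper, for illustration only. */
static void copy_from_domheap_page(struct page_info *pg, void *dst,
                                   size_t len)
{
    void *va = __map_domain_page(pg);  /* draws from a small, finite pool */
    memcpy(dst, va, len);
    unmap_domain_page(va);             /* release the mapping promptly */
}

/* Long-lived pointer, as tmh_persistent_pool_page_get() returns after the
 * patch: page_to_virt() assumes the page lies inside Xen's 1:1 direct
 * mapping, which is why this approach cannot work for pages above 5TiB. */
static void *persistent_pool_va(struct page_info *pg)
{
    return page_to_virt(pg);
}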

