
[Xen-devel] RE: Hugepage support to tmem.



Hi Ash --

> We have gone through the code for hugepage (superpage) support in
> Xen. We found that whenever a domain requests a page, it allocates
> either a singleton page (order = 0) or a superpage (order = 9); no
> code exists for 0 < order < 9.

I believe that is correct, although I think there is also code
for 1GB pages, at least for HVM domains.
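
For reference, the heap allocator works in power-of-two "orders": an
order-n allocation is 2^n contiguous 4KB frames, so order 9 is one
2MB superpage and order 18 would be 1GB. A trivial stand-alone sketch
of the arithmetic (not actual Xen code):

    #include <stdio.h>

    #define PAGE_SHIFT 12                  /* 4KB base page on x86 */
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    /* An order-n allocation is 2^n contiguous base pages. */
    static unsigned long order_to_bytes(unsigned int order)
    {
        return PAGE_SIZE << order;
    }

    int main(void)
    {
        printf("order 0:  %lu KB\n", order_to_bytes(0) >> 10);   /* 4 KB */
        printf("order 9:  %lu MB\n", order_to_bytes(9) >> 20);   /* 2 MB */
        printf("order 18: %lu GB\n", order_to_bytes(18) >> 30);  /* 1 GB */
        return 0;
    }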

> In our study, we also came across a point that if a domain requests
> a 2MB page and gets one, then it will not receive any 4KB pages for
> the rest of its lifetime, which means a single domain cannot use
> normal pages and superpages simultaneously. Is that really so?

This doesn't sound correct, though I am not an expert in the
superpage code.
 
> Some part of the code says that if it's not possible to allocate a
> superpage, then it allocates a linked list of 512 4KB pages, i.e.
> from PoD (1GB->2M or 2M->4k). Performance is improved with huge
> pages due to their contiguity. But in the case above, does it mean
> that performance is degraded?

PoD (populate-on-demand) is used only for HVM (fully virtualized)
domains but, yes, if a 2MB guest-physical page is backed in Xen by
512 discontiguous 4KB host-physical pages, there is a performance
degradation.
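
The degradation is largely a TLB/page-table-walk effect: one
superpage mapping covers the whole 2MB region, while the split case
needs 512 independent 4KB mappings. A toy illustration of the ratio
(names made up, not the real p2m code):

    #define SUPERPAGE_ORDER     9
    #define PAGES_PER_SUPERPAGE (1u << SUPERPAGE_ORDER)   /* 512 */

    /* Mappings (and thus worst-case TLB entries) needed to cover
     * one 2MB guest-physical region. */
    static unsigned int mappings_for_2mb(int superpage_backed)
    {
        return superpage_backed ? 1 : PAGES_PER_SUPERPAGE;
    }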

> I think that in the design we need to solve this splitting problem.
> According to the code, such splitting is done in HAP mode, so what
> exactly happens in shadow mode?

I think Tim Deegan is much more of an expert in this area than
I am.  I have cc'ed him.

Note that tmem is primarily used in PV domains.  It can be used in an
HVM domain, but that requires additional patches in the guest (the
PV-on-HVM patches from Stefano Stabellini) and, although I once got
this working and tested it, I do not know whether it still works.
In any case, my knowledge of the memory code supporting HVM domains
is very limited.

My thoughts about working on 2MB pages for tmem were somewhat
different:

(1) Change the in-guest balloon driver to relinquish/reclaim only
    2MB pages.  This is a patch that Jeremy Fitzhardinge has worked
    on, but I don't know its status.

(2) Change tmem's memory allocation to obtain only contiguous 2MB
    physical pages from the Xen TLSF memory allocator.

(3) Have tmem manage ephemeral 4KB tmem pages inside those physical
    2MB pages in such a way that tmem_ensure_avail_pages() can
    easily evict a whole physical 2MB page, including all of the
    ephemeral 4KB tmem pages inside it (see the sketch below).
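
To make (2) and (3) a little more concrete, here is a very rough
sketch; every name in it (tmem_chunk_2mb, chunk_evict, and so on) is
made up for illustration and is not existing Xen or tmem code:

    #include <stdint.h>

    #define SLOTS_PER_CHUNK 512   /* 4KB pages per 2MB chunk */

    /* Hypothetical descriptor for one contiguous 2MB region obtained
     * from the TLSF allocator.  Chunks would sit on an LRU list so
     * eviction can pick the coldest one. */
    struct tmem_chunk_2mb {
        void *base;                                   /* start of region */
        uint64_t slot_bitmap[SLOTS_PER_CHUNK / 64];   /* 1 = slot in use */
        unsigned int used;                            /* occupied slots  */
        struct tmem_chunk_2mb *lru_prev, *lru_next;
    };

    /* Mark 4KB slot i (0..511) as holding an ephemeral tmem page. */
    static void chunk_set_slot(struct tmem_chunk_2mb *c, unsigned int i)
    {
        c->slot_bitmap[i / 64] |= 1ULL << (i % 64);
        c->used++;
    }

    /* Evict a whole chunk: because every page in it is ephemeral,
     * tmem may simply discard them all, after which the contiguous
     * 2MB region can go back to the allocator in one piece.  This is
     * what would let tmem_ensure_avail_pages() produce a free 2MB
     * page without hunting for 512 scattered free frames. */
    static void *chunk_evict(struct tmem_chunk_2mb *c)
    {
        /* ...invalidate tmem objects pointing into c->base... */
        c->used = 0;
        return c->base;   /* caller returns this region to TLSF */
    }

The point of such a layout would be that eviction granularity matches
allocation granularity, so the 2MB pool cannot fragment into
unreclaimable 4KB pieces.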

I have not thought through a complete design for this... it
may be very difficult or nearly impossible.  But this is a rough
description of what I was thinking about when I said a few weeks
ago that re-working tmem to work with 2MB pages would be
a good project.
