
[Xen-devel] Xen/ia64 - global or per VP VHPT



Magenheimer, Dan (HP Labs Fort Collins) wrote:
> Hi Eddie --
>>> I see what you mean about vMMU.  Is it ifdef'd or
>>> a runtime choice?  I'd like to be able to do some
>>> comparisons of a global lVHPT vs a per-domain lVHPT.
>> Besides performance, a global VHPT makes it hard to support
>> multiple page sizes, e.g. Dom1 using a 16KB default page size
>> while Dom2 uses a 4KB page size. Also, once a VM sets rr.ps to a
>> new value, I guess you need to purge the whole VHPT, which causes
>> serious problems as the system scales up.
> 
> This is not as much of an issue for paravirtualized domains
> as a minimum page size can be specified.  Also rr.ps
> can be virtualized (and in fact is virtualized in the
> current implementation).
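To make Dan's point concrete, here is a minimal sketch (all names are
hypothetical, not the actual Xen/ia64 code) of how rr.ps can be
virtualized: the guest's write to a region register is trapped, the
requested value is remembered for guest-visible reads, and a fixed
minimum page size is installed in the machine register instead.  On
ia64 the ps field occupies bits 7:2 of a region register.

#include <stdint.h>

#define XEN_MIN_PAGE_SHIFT 14ULL        /* assumed fixed ps: 16KB */

struct vcpu_rr {
    uint64_t guest_rr;                  /* what the guest thinks is set */
    uint64_t machine_rr;                /* what hardware actually holds */
};

/* Hypothetical handler for a trapped guest "mov rr[r]=v". */
static void vcpu_set_rr(struct vcpu_rr *vrr, uint64_t val)
{
    vrr->guest_rr = val;                /* guest reads get this back */
    /* Overwrite the guest's ps (bits 7:2) with the fixed minimum. */
    vrr->machine_rr = (val & ~0xfcULL) | (XEN_MIN_PAGE_SHIFT << 2);
    /* ...then load vrr->machine_rr into the real region register. */
}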
With a single global VHPT and a forced common page size, all the
domains must be paravirtualized to one hard-coded page size; this
definitely limits the capability of Xen/ia64.  Will this also imply
that only certain versions of the domain OSes can run on the same
platform?
What is the scalability issue with a single VHPT table?  Imagine
multi-VP/multi-LP operation, with all the LPs walking the same table:
you would need a global purge, or an IPI to every processor, just to
purge a single entry.  Costly!
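To illustrate the cost being objected to (the function and structure
names here are hypothetical, not from the real tree): with one global
VHPT shared by all logical processors, dropping a single translation
means notifying every LP that may have walked the table, typically an
IPI per processor, whereas a per-VP VHPT purge is a local store into
the owning virtual processor's private table.

#define NR_LPS 64                       /* assumed number of LPs */

struct vhpt_entry { unsigned long tag, pte; };

struct vhpt {
    struct vhpt_entry *table;
    unsigned long nr_entries;           /* power of two */
};

/* Hypothetical hash of a virtual address into the table (16KB ps). */
static unsigned long vhpt_hash(struct vhpt *h, unsigned long vaddr)
{
    return (vaddr >> 14) & (h->nr_entries - 1);
}

/* Per-VP VHPT: a purge touches only this vCPU's own table. */
static void per_vp_vhpt_purge(struct vhpt *h, unsigned long vaddr)
{
    h->table[vhpt_hash(h, vaddr)].tag = ~0UL;   /* mark invalid */
}

/* Hypothetical stand-in for raising an inter-processor interrupt
 * that asks LP 'lp' to flush its view of the entry for 'vaddr'. */
static void send_purge_ipi(int lp, unsigned long vaddr)
{
    (void)lp; (void)vaddr;
}

/* Global VHPT: every LP must be interrupted for the same purge. */
static void global_vhpt_purge(unsigned long vaddr)
{
    int lp;
    for (lp = 0; lp < NR_LPS; lp++)
        send_purge_ipi(lp, vaddr);      /* O(#LPs) per entry purged */
}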
> 
> I agree that purging is a problem, however Linux does not
> currently change page sizes frequently.
Again, I hope we are not limiting this Xen/ia64 to a single OS
version.  I believe Linux also needs to purge entries for reasons
other than page size changes!
Through a per-VP VHPT and the VT-i feature of ia64, we can extend
Xen/ia64 to run multiple unmodified OSes without knowing what page
size each domain is using.
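A sketch of the per-VP arrangement being advocated (again with
illustrative names only): each virtual processor gets its own VHPT,
hashed with the page size its domain actually uses, so domains with
different page sizes, including unmodified OSes run under VT-i, never
have to agree on a common table.

#include <stdlib.h>

struct vp_vhpt_entry { unsigned long tag, pte, itir; };

struct vp_vhpt {
    struct vp_vhpt_entry *table;
    unsigned int page_shift;            /* this domain's own ps */
    unsigned long nr_entries;           /* power of two */
};

static struct vp_vhpt *vp_vhpt_alloc(unsigned int page_shift,
                                     unsigned long nr_entries)
{
    struct vp_vhpt *h = malloc(sizeof(*h));
    if (h == NULL)
        return NULL;
    h->table = calloc(nr_entries, sizeof(*h->table));
    if (h->table == NULL) {
        free(h);
        return NULL;
    }
    h->page_shift = page_shift;         /* no cross-domain agreement */
    h->nr_entries = nr_entries;
    return h;
}

static unsigned long vp_vhpt_hash(struct vp_vhpt *h, unsigned long vaddr)
{
    return (vaddr >> h->page_shift) & (h->nr_entries - 1);
}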
