Re: [Xen-ia64-devel] RE: Xen/ia64 - global or per VP VHPT
> If your domains can grow/shrink (the ballooning case you mention above) to
> use 4GB - 64GB of memory, then in the case of a single VHPT it is OK to
> just allocate X, although this is wasteful if you are not using all 64GB
> (e.g., you are running two domains each using 4GB of memory), but you do
> not have a choice (other than dynamically growing/shrinking the VHPT). In
> the case of multiple VHPTs you will have a similar problem, although the
> size could be 16X, if all domains can grow to 64GB and if there is no
> growing/shrinking of the VHPT. In other words, growing/shrinking the VHPT
> would be more critical to your example if using per-domain VHPTs, but it
> would also be important (to avoid allocating size X when the VMs running
> are not using all the 64GB of memory) in the case of a single VHPT.

I'm not so sure about this point: with the global VHPT case you can just
always allocate X. It's not really wasteful, because if the domains *aren't*
using all the machine memory then the memory reserved for the VHPT wouldn't
be needed for anything else. If they *are* using all the machine memory then
the global VHPT needs to take up X GB anyhow.

This particular point seems like an advantage for the global VHPT case,
since it makes provisioning easier (always allocate X, and it doesn't matter
what the domains do).

I'm not lending my support to either side in this debate, though, because I
don't know enough of the specifics ;-). I'm an x86/Xen/Linux guy, not an IPF
guy, and will shut up now :-)

$0.02,

Cheers,
Mark
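To make the X vs. 16X arithmetic above concrete, here is a rough
back-of-the-envelope sketch; the 1/64 VHPT-to-memory ratio, the 16-domain
worst case and the 64GB figures are illustrative assumptions for this
example only, not values taken from the Xen/ia64 code.

/*
 * Back-of-the-envelope sizing for the X vs. 16X argument above.
 * Every number here is an assumption made up for illustration; the
 * real VHPT-to-memory ratio depends on page size and entry format.
 */
#include <stdio.h>

#define GB              (1ULL << 30)
#define MACHINE_MEM     (64 * GB)   /* physical memory in the box      */
#define NUM_DOMAINS     16          /* worst case: 16 ballooning VMs   */
#define DOMAIN_MAX_MEM  (64 * GB)   /* each may balloon up to 64GB     */
#define VHPT_RATIO      64          /* assumed rule: VHPT = memory/64  */

int main(void)
{
    /* Global VHPT: sized once for machine memory, call it X.          */
    unsigned long long global_vhpt = MACHINE_MEM / VHPT_RATIO;

    /* Per-domain VHPTs with no grow/shrink: each must be sized for    */
    /* the largest memory the domain may ever balloon up to.           */
    unsigned long long per_domain_vhpt = DOMAIN_MAX_MEM / VHPT_RATIO;
    unsigned long long per_domain_total = per_domain_vhpt * NUM_DOMAINS;

    printf("global VHPT  (X):  %llu MB\n", global_vhpt >> 20);
    printf("per-domain total:  %llu MB (%.0fX)\n",
           per_domain_total >> 20,
           (double)per_domain_total / global_vhpt);
    return 0;
}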
> > On a related note, I think you said something about needing to
> > guarantee forward progress. In your implementation, is the VHPT
> > required for this? If so, what happens when a domain migrates
> > at the exact point where the VHPT is needed to guarantee forward
> > progress? Or do you plan on moving the VHPT as part of migration?
>
> What I think I said is that having collision chains in the VHPT is
> critical to avoiding forward progress issues. The problem is that IPF may
> need up to 3 different translations for a single instruction. If you do not
> have collision chains and the translations required for a single
> instruction (I-side, D-side and RSE) happen to hash to the same VHPT entry,
> you may get into a situation in which the entries keep colliding with each
> other and the guest makes no forward progress (it enters a state in which
> it alternates the I-side, D-side and RSE faults). By the way, this is not
> just theoretical; I have seen it happen in two different implementations of
> IPF virtual MMUs.
>
> First, a clarification: there is no relationship (that I know of) between
> migration and forward progress issues, but I will comment on the migration
> example anyway. Moving the VHPT is not necessary at migration time
> (actually, in general, it is not possible, as that would imply that the
> same machine pages are allocated in the target machine for the VM being
> moved as in the source machine); you just rebuild it in the new machine (I
> am assuming that the contents of the VHPT are demand built, right?). In a
> migration case you can start with an empty VHPT (all entries are invalid)
> and let the guest generate page faults and the VM build the VHPT.
>
> >> I see it a bit more black and white than you do.
> >
> > Black and white invariably implies a certain set of assumptions.
> > I'm not questioning your position given your set of assumptions,
> > I'm questioning your assumptions -- as those assumptions may
> > make good sense in some situations (e.g. VT has to implement
> > all possible cases) but less so in others (e.g. paravirtualized).
>
> You keep on making this differentiation between full and paravirtualization
> (but I don't think that is very relevant to what I am saying); please
> explain how, in a paravirtualized guest, the example I presented above of
> 10 UP VMs having to synchronize updates to the VHPT is not an issue.
>
> > Dan
>
> Bert
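For what it's worth, here is a minimal sketch of the collision-chain point
(and of rebuilding the VHPT from empty on the migration target). The
structure layout, names, sizes and hash below are made up for illustration
and are not the actual Xen/ia64 long-format VHPT code.

/*
 * Illustrative sketch only: a VHPT bucket with a collision chain,
 * showing why chaining avoids the I-side/D-side/RSE livelock described
 * above, and how a migration target can start with an empty table and
 * let guest faults repopulate it.
 */
#include <stdint.h>
#include <string.h>

#define VHPT_BUCKETS (1 << 16)

struct vhpt_entry {
    uint64_t           tag;    /* identifies the guest virtual page    */
    uint64_t           pte;    /* machine translation                  */
    int                valid;
    struct vhpt_entry *next;   /* collision chain                      */
};

static struct vhpt_entry vhpt[VHPT_BUCKETS];

unsigned long vhpt_hash(uint64_t vaddr)
{
    /* Stand-in hash; the real one is defined by the architecture. */
    return (unsigned long)((vaddr >> 14) & (VHPT_BUCKETS - 1));
}

/*
 * Without the chain, inserting the D-side translation here could evict
 * the I-side one that hashed to the same bucket, the refetch would then
 * evict the D-side one again, and so on: no forward progress.  With the
 * chain, all three translations needed by one instruction can coexist.
 * (A real implementation would first search the chain for a matching
 * tag and reuse that entry.)
 */
void vhpt_insert(uint64_t vaddr, uint64_t tag, uint64_t pte,
                 struct vhpt_entry *spare)
{
    struct vhpt_entry *head = &vhpt[vhpt_hash(vaddr)];

    if (!head->valid) {
        head->tag = tag;
        head->pte = pte;
        head->valid = 1;
        return;
    }
    /* Collision: chain a spare entry instead of overwriting the head. */
    spare->tag = tag;
    spare->pte = pte;
    spare->valid = 1;
    spare->next = head->next;
    head->next = spare;
}

/*
 * Migration target: nothing is copied over.  Start with every entry
 * invalid and let the guest's page faults drive the rebuild on demand.
 */
void vhpt_flush_all(void)
{
    memset(vhpt, 0, sizeof(vhpt));
}

int main(void)
{
    struct vhpt_entry spare = { 0 };

    /* Two addresses that collide in the same bucket under this hash:
     * both end up chained rather than evicting one another.           */
    vhpt_insert(0x4000ULL, 0x1, 0x111, &spare);
    vhpt_insert(0x4000ULL + ((uint64_t)VHPT_BUCKETS << 14), 0x2, 0x222, &spare);

    vhpt_flush_all();   /* what a freshly-migrated domain starts from  */
    return 0;
}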
_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel