
Re: [Xen-devel] Re: [XEN] using shmfs for swapspace



On Tue, 4 Jan 2005, Mark Williamson wrote:

> > for doing opportunistic page recycling ("I don't need this page but when
> > I ask for it back please tell me if you trashed the content")

> We've talked about doing this but AFAIK nobody has gotten round to it yet
> because there hasn't been a pressing need (IIRC, it was on the todo list
> when Xen 1.0 came out).
>
> IMHO, it doesn't look terribly difficult but would require (hopefully
> small) modifications to the architecture independent code, plus a little
> bit of support code in Xen.

The architecture independent changes are fine, since
they're also useful for S390(x), PPC64 and UML...

> I'd quite like to look at this one fine day but I suspect there are more
> useful things I should do first...
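
To pin down the semantics, here is a toy model in C of the
hint/reclaim handshake.  All of the names (hint_release,
hypervisor_recycle, try_reclaim) are made up for illustration;
no such interface exists in Xen today:

/*
 * Toy model of opportunistic page recycling.  All names here
 * are invented for illustration; this is not a Xen interface.
 */
#include <stdbool.h>
#include <stdio.h>

#define NPAGES 8

/* Per-frame state as the hypervisor might track it. */
enum frame_state { OWNED, HINTED, RECYCLED };
static enum frame_state frames[NPAGES];

/* Guest -> hypervisor: "I don't need this page right now,
 * but keep the contents around if you can." */
static void hint_release(int pfn)
{
    frames[pfn] = HINTED;
}

/* Hypervisor, under memory pressure, grabs a hinted frame
 * for someone else and discards its contents. */
static void hypervisor_recycle(int pfn)
{
    if (frames[pfn] == HINTED)
        frames[pfn] = RECYCLED;
}

/* Guest -> hypervisor: "I want that page back; did you trash it?"
 * Returns true if the old contents are still intact. */
static bool try_reclaim(int pfn)
{
    bool intact = (frames[pfn] == HINTED);
    frames[pfn] = OWNED;   /* the guest owns the frame again either way */
    return intact;
}

int main(void)
{
    hint_release(3);         /* guest evicts a clean page-cache page */
    hypervisor_recycle(3);   /* hypervisor happens to reuse the frame */

    if (try_reclaim(3))
        printf("contents intact, saved a disk read\n");
    else
        printf("contents gone, fall back to re-reading from disk\n");
    return 0;
}

The useful property is that the guest only hints away pages whose
contents it can regenerate (clean page cache, say), so "trashed"
costs an extra disk read, never lost data.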

I wonder if the same effect could be achieved by just
measuring the VM pressure inside the guests and
ballooning the guests as required, letting them grow
and shrink with their workloads.

That wouldn't need many kernel changes, maybe just a
few extra statistics, or maybe all the needed stats
already exist.  It would also allow more complex
policy to be done in userspace, e.g. dealing with Xen
guests of different priority...
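
As a rough sketch of what such a userspace policy daemon might look
like: read_pressure() and set_balloon_target() below are hypothetical
stand-ins for whatever dom0 interface would export guest VM statistics
and drive the balloon driver, and the thresholds are made up:

/*
 * Sketch of a userspace balloon policy loop.  read_pressure() and
 * set_balloon_target() are hypothetical stubs; a real version would
 * talk to the guests through a dom0 control interface.
 */
#include <stdio.h>

#define NGUESTS  3
#define STEP_KB  (4 * 1024)          /* grow/shrink in 4 MB steps */

struct guest {
    int  id;
    int  priority;                   /* higher = more deserving of memory */
    long target_kb;                  /* current balloon target */
};

/* Hypothetical stub: fraction of recent time the guest spent
 * reclaiming memory, scaled 0..100. */
static int read_pressure(int guest_id)
{
    return (guest_id * 40) % 100;    /* fake data for the sketch */
}

/* Hypothetical stub: hand the guest's balloon driver a new target. */
static void set_balloon_target(int guest_id, long kb)
{
    printf("guest %d -> target %ld kB\n", guest_id, kb);
}

/* One pass of the policy: grow thrashing guests (weighted by
 * priority), shrink idle ones. */
static void rebalance(struct guest *g, int n)
{
    for (int i = 0; i < n; i++) {
        int pressure = read_pressure(g[i].id);

        if (pressure > 75)
            g[i].target_kb += STEP_KB * g[i].priority;
        else if (pressure < 10)
            g[i].target_kb -= STEP_KB;

        set_balloon_target(g[i].id, g[i].target_kb);
    }
}

int main(void)
{
    struct guest guests[NGUESTS] = {
        { 1, 1, 128 * 1024 },        /* low-priority guest, 128 MB */
        { 2, 3, 256 * 1024 },        /* high-priority guest, 256 MB */
        { 3, 1, 128 * 1024 },
    };

    rebalance(guests, NGUESTS);      /* a daemon would loop with a sleep */
    return 0;
}

The point of keeping this in userspace is exactly the one above:
priorities and per-guest policy become ordinary dom0 code rather
than kernel changes.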

--
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan




 

