
Re: [Xen-devel] testing the balloon driver



On Thu, 21 Oct 2004, Ian Pratt wrote:

> I'm kinda surprised that the balloon driver's aggressive memory
> grabbing doesn't cause the OOM killer to start selecting
> victims for extermination. 

It appears the system is OK as long as you have enough
swap.

> If it really seems stable then maybe we don't have to add rate
> limiting to the balloon driver after all? We grab the pages with
> GFP_HIGHUSER. Maybe that's sufficiently non-aggressive as-is?

It may be aggressive, but it's no worse than the worst
userspace programs, which means the Linux VM already has
to withstand this kind of load.
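
For reference, here is a minimal sketch of what inflating the
balloon with GFP_HIGHUSER looks like; balloon_inflate and the
ballooned_pages list are illustrative names, not the actual
driver code:

    /* Illustrative sketch, not the actual Xen balloon driver:
     * inflate the balloon by allocating pages the same way an
     * ordinary userspace fault would. */
    #include <linux/mm.h>
    #include <linux/list.h>

    static LIST_HEAD(ballooned_pages);

    static int balloon_inflate(unsigned long nr_pages)
    {
            unsigned long i;

            for (i = 0; i < nr_pages; i++) {
                    /* GFP_HIGHUSER may sleep, reclaim and push
                     * pages out to swap, but it never dips into
                     * the atomic reserves. */
                    struct page *page = alloc_page(GFP_HIGHUSER);

                    if (!page)
                            return -ENOMEM; /* keep what we got */

                    list_add(&page->lru, &ballooned_pages);
                    /* ...return the machine frame to Xen here... */
            }
            return 0;
    }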

> I still think we should make all increase/decrease reservation
> calls (e.g. those associated with netfront) go through the
> balloon driver so that we can handle some of the low memory cases
> more gracefully.

I suspect that for the drivers we might want a mempool, so
that the system is guaranteed to make forward progress.
I'll try harder to make the system crash, so I can tell
for sure whether this is needed.
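
Roughly what I have in mind, assuming the 2.6 mempool API;
balloon_pool and balloon_pool_init are made-up names for
illustration:

    #include <linux/mempool.h>
    #include <linux/mm.h>

    static mempool_t *balloon_pool;

    static void *page_pool_alloc(int gfp_mask, void *pool_data)
    {
            return alloc_page(gfp_mask);
    }

    static void page_pool_free(void *element, void *pool_data)
    {
            __free_page(element);
    }

    static int __init balloon_pool_init(void)
    {
            /* Keep 16 pages in reserve; mempool_alloc() falls
             * back to the reserve when alloc_page() fails, so
             * the drivers can always make forward progress. */
            balloon_pool = mempool_create(16, page_pool_alloc,
                                          page_pool_free, NULL);
            return balloon_pool ? 0 : -ENOMEM;
    }

A driver would then call mempool_alloc(balloon_pool, GFP_NOIO)
on its allocation path and mempool_free() when it hands the
page back.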

One thing we should probably do is add the balloon
memory size to /proc/meminfo, so utilities can see
how much memory we really have and how much has been
given up.
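
Something along these lines in meminfo_read_proc() would
probably do; balloon_current_pages is a made-up name for
whatever counter the driver already keeps internally:

    /* Hypothetical extra line for /proc/meminfo showing how
     * many pages the balloon has given back to the hypervisor;
     * the counter name is illustrative. */
    len += sprintf(page + len, "Ballooned:    %8lu kB\n",
                   balloon_current_pages << (PAGE_SHIFT - 10));

Tools like free(1) and top(1) would then at least have the
number available, even if they need small patches to show it.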



-- 
"Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are,
by definition, not smart enough to debug it." - Brian W. Kernighan


