
RE: [Xen-users] crazy SWAP and RAM idea



Steffen, my replies are inline below.
On Sun, 2006-09-10 at 20:33 +0200, Steffen Heil wrote:
> Hi
> 
> > One of the biggest advantages to using Xen is that 
> > malloc()'ing processes that need to spawn children are able 
> > to do so in cache. This gives the dom-u performance that a 
> > non virtualized server would enjoy.
> 
> Could you explain this in more detail, please?
> 

When you start any daemon that accepts connections, it reads its
configuration file to learn how many idle servers it should launch. It
then malloc()s and tries to grab enough contiguous space in cache to
hold them. This avoids having to fork a child for every incoming
connection.
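
Just to make that concrete, here's a rough prefork sketch. It's only an
illustration; the child count, port and helper names are invented for the
example, not pulled from any real daemon:

/* Minimal prefork sketch: the whole pool of idle children is created at
 * startup, so nothing has to fork() on the connection path.
 * NUM_IDLE_CHILDREN stands in for the value a real daemon would read
 * from its config file (think StartServers). */
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define NUM_IDLE_CHILDREN 5
#define PORT 8080

static void child_loop(int listen_fd)
{
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);  /* block until a client arrives */
        if (conn < 0)
            continue;
        const char *msg = "hello from a preforked child\n";
        write(conn, msg, strlen(msg));
        close(conn);
    }
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    /* Fork the pool now, while the machine is idle, instead of once per
     * connection later. */
    for (int i = 0; i < NUM_IDLE_CHILDREN; i++)
        if (fork() == 0) {
            child_loop(listen_fd);
            _exit(0);
        }

    /* Parent just reaps; a real daemon would also respawn dead children. */
    for (;;)
        wait(NULL);
}

A real daemon does exactly this, it just gets the numbers from its
configuration file instead of a #define.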

> > SQL, Web , Email, All services will need to 
> > fork upon every connection.
> 
> No. Good current software doesn't. My SQL and Web-Servers are threaded so
> there is no need to fork, still searching for a way for email...
> 

Right, and they create those threads (or children) in contiguous blocks
of cache, as you instruct via your configuration. If you reduce the
available RAM and intentionally send them to disk, they won't find
contiguous blocks and won't cache children. So they must not only fork,
but fork to disk, whenever a connection comes in.
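
The threaded flavour you're describing looks much the same, just with a
pool of worker threads created once at startup instead of forked
children. Again only a sketch with invented sizes (build with -pthread):

/* Minimal pre-spawned thread pool: every worker is created at startup and
 * blocks in accept() on the shared listening socket, so there is no
 * per-connection fork() or pthread_create(). */
#include <pthread.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define NUM_WORKERS 8
#define PORT 8081

static void *worker(void *arg)
{
    int listen_fd = *(int *)arg;

    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;
        const char *msg = "hello from a pre-spawned thread\n";
        write(conn, msg, strlen(msg));
        close(conn);
    }
    return NULL;
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    pthread_t tid[NUM_WORKERS];

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    /* All the workers exist before the first client ever connects. */
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&tid[i], NULL, worker, &listen_fd);

    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(tid[i], NULL);  /* workers loop forever in this sketch */
    return 0;
}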

> > You also risk DB corruption, (not to mention inode corruption 
> > [are you using ext3? I hope not, or you're looking to start 
> > grepping for your data using strings you hope exist in the 
> > files you lost] ). Just wait until a dom-u is being hammered 
> > and dom-0 experiences an unorderly shutdown, hope you've 
> > polished up on your regex to find your data :)
> 
> I don't understand that at all. First, if ext3 (which DOES have journaling)
> loses any data on an unclean shutdown, then it is faulty. And yes, I use it on
> several machines. And secondly, I think that largely depends on how you implement
> domU partitions. Mine are LVM...
> 

The problem is that by using swap you're using a type of memory that the
kernel frees immediately. If for some reason an interruption happens,
what is also going to happen is a 'clog' in I/O that will prevent the
inodes from syncing as they normally would.

You're putting applications into the swap space that's normally used as
overflow when the server finds itself under load, and flooding the
dentry cache. It has nothing to do with the type or speed of storage;
this is a kernel phenomenon on the dom-u itself. You are, in essence,
reducing the size of a funnel and clogging the smaller end with bubble gum.

> > Why shoot your OS in the foot intentionally when other means 
> > exist to accomplish what you want to do? I just don't get 
> > it.. All you're doing is not only retarding Xen, but also your 
> > guest OSes and their services ..
> > for what purpose?
> 
> Hey come on. I wrote "crazy idea" myself and I did definitely not plan to
> take this to production or customer domains...
> It was an idea and I thought maybe it's worth some discussion (as I still
> do).
> 

Don't take offense at my rather dry personality :) Remember, it's hard to
convey tone of voice and diction through a mailing list. I'm just
extremely curious what need (other than just to see if it would work)
is fueling the temporary insanity you're experiencing. 

> Remember that the main idea here has NOT been to do something like "ram
> bursts" (if I understand that correctly as automatic changes of domU memory),
> but to give dom0 a better way to control disk caching instead of relying on
> every single domain to have its own cache.

Now things are sounding a little more sane :) The previous explanations
made it sound like you were trying to turn Xen into Virtuozzo.

> 
> The idea arose from a situation where I had the same (READ-ONLY) partition
> mounted on several domains, which ALL had a lot of that data in cache
> memory... (Still working on problems with that machine, as I didn't find a
> way to stop that.)

Why would you want to stop that? You can adjust how quickly your kernel
frees inactive cache rather easily, and tell your daemons not to keep as
many idle children in memory by tweaking the maximum # of connections
(or iterations) each child can have in its lifetime. If you need more
contiguous blocks of cache available for other things, just split up
your over-malloc()'ing services to separate dom-u's. 
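
(For the first part, on a 2.6 kernel the usual knobs are the
vm.vfs_cache_pressure and vm.swappiness sysctls.) The per-child cap boils
down to something like the sketch below; the constants are placeholders,
and real daemons expose them as config directives such as Apache's
MaxRequestsPerChild:

/* Sketch of a per-child request cap: each child serves at most
 * MAX_REQUESTS_PER_CHILD connections, then exits so whatever memory it
 * grew is handed back; the parent forks a fresh replacement.
 * All numbers here are placeholders, not recommendations. */
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define IDLE_CHILDREN            4
#define MAX_REQUESTS_PER_CHILD 1000
#define PORT                   8082

static void child_loop(int listen_fd)
{
    for (int served = 0; served < MAX_REQUESTS_PER_CHILD; served++) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;
        const char *msg = "hi\n";
        write(conn, msg, strlen(msg));
        close(conn);
    }
    _exit(0);  /* retiring returns the child's memory to the system */
}

int main(void)
{
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    for (int i = 0; i < IDLE_CHILDREN; i++)
        if (fork() == 0)
            child_loop(listen_fd);

    /* Every time a child retires, replace it so the pool stays full. */
    for (;;) {
        wait(NULL);
        if (fork() == 0)
            child_loop(listen_fd);
    }
}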

It sounds like you're putting way too much gray matter into solving what
could be a really simple problem :)

HTH

--Tim
> 


> Regards,
>   Steffen


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

