
Re: [Xen-users] [Fwd: XCP - extreme high load on pool master]



This console problem I can empathise with.

Accessing serial consoles via VNC doesn't really make sense to me.

I'd like some XCP plugin that could publish consoles via ssh/telnet and xenconsole.

Then cloud customers could log in and get simple out-of-band access to their Linux-based VMs.
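
Something along these lines would do, as a rough sketch (it assumes ssh access to dom0 and the xenconsole helper that XCP ships; the exact path and the dom-id parameter name are my assumptions):

    # on the host running the VM, inside dom0
    DOMID=$(xe vm-list name-label=my-vm params=dom-id --minimal)
    /usr/lib/xen/bin/xenconsole $DOMID    # attach to the VM's text console

A plugin would only need to wrap something like that behind per-customer ssh or telnet accounts.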




On 21 July 2010 21:51, George Shuklin <george.shuklin@xxxxxxxxx> wrote:
Well... I can accept the idea of a 'high load' on the master, but my main
concern is that it is a single-thread problem. Most of the CPU time is used
by a single xapi process, so if I increase the number of hosts and VMs per
host only slightly (for example, 40 VMs per host on 16 hosts gives about
640 VMs per pool) and xapi cannot serve every request in a single thread...
I don't know what will happen, but I don't like it already.

About the console problem...

After a few tests with http(s) tunnelling I settled on simple ssh tunnelling
from localhost on the host to localhost on my machine (I connect using the -L
switch of ssh and then run xvncviewer localhost:59xx).
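
Concretely it looks something like this (5901 is just a placeholder for whichever 59xx port the VM's console sits on, and xcp-host for the pool host):

    # forward local port 5901 to the same port on the XCP host's loopback
    ssh -L 5901:localhost:5901 root@xcp-host

    # then, from another terminal on my machine
    xvncviewer localhost:5901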

The other problem is detecting the VNC port number for a given VM... Right
now I use a hack along the lines of: xe vm-list uuid=... params=domid, then
ps aux | grep for that domid, then netstat -lpn for that pid - but it is not
very accurate and not very scriptable...
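
A slightly more scriptable version of that hack could look like the sketch below. It is only a sketch: it assumes the xe parameter is spelled dom-id, that the console is served by a vncterm or qemu-dm process whose command line contains the domid, and it inherits the same inaccuracy as the manual grep.

    #!/bin/bash
    # usage: ./vnc-port.sh <vm-uuid>
    UUID=$1

    # 1. domid of the VM
    DOMID=$(xe vm-list uuid=$UUID params=dom-id --minimal)

    # 2. pid of the process serving that domain's console
    PID=$(ps aux | grep -E 'vncterm|qemu-dm' | grep -vw grep | grep -w -- "$DOMID" \
          | awk '{print $2}' | head -1)

    # 3. the local address:port that pid is listening on
    netstat -lpn 2>/dev/null | grep -w -- "$PID" | awk '{print $4}'

If this XCP release publishes the port in xenstore, xenstore-read /local/domain/$DOMID/console/vnc-port might be a cleaner source for the same number, but treat that as an assumption.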



On Wed, 21/07/2010 at 16:39 -0400, Vern Burke wrote:
> Thousands of VMs on a single XCP pool? It's just my opinion, of course,
> but I wouldn't try to run 100:1 (or worse) virtualization ratios unless
> you're running 12 cores or better (and a ton of memory) in a box.
>
> Keep in mind that the pool master is doing a ton of work for the entire
> pool, which explains why its load is higher than on the slaves. In my
> cloud, I generally reserve the pool master for admin work rather than
> running production workloads on it.
>
> The reason for this is that there's still an open bug in XCP's
> developer console: you can only connect to the console of a VM
> that's running on the pool master. Try to connect to a VM on any
> of the slaves and you get just a blank white window.
>
>
> Vern Burke
>
> SwiftWater Telecom
> http://www.swiftwatertel.com
> Xen Cloud Control System
> http://www.xencloudcontrol.com
>
>
> On 7/21/2010 4:13 PM, George Shuklin wrote:
> > Good day.
> >
> > We are trying to test an XCP cloud under a production-like load (4 hosts, each with 24 GB of memory and 8 cores).
> >
> > But with only about 30-40 virtual machines I already get an extreme load in dom0
> > on the pool master host: the load average is about 3.5-6, and most of the time is
> > used by the xapi and stunnel processes.
> >
> > It really bothers me: what will happen under higher load, with a few thousand
> > VMs across 10-16 hosts in the pool...
> >
> > top data:
> >
> > Tasks:  95 total,   3 running,  91 sleeping,   0 stopped,   1 zombie
> > Cpu(s): 19.4%us, 42.1%sy, 0.0%ni, 35.8%id, 1.3%wa, 0.0%hi, 1.0%si, 0.3%st
> > Mem:    746496k total,  731256k used,   15240k free,   31372k buffers
> > Swap:   524280k total,     128k used,  524152k free,  498872k cached
> >
> >  PR  NI  VIRT  RES   SHR S %CPU %MEM     TIME+ COMMAND
> >  17  -3  315m  14m  5844 S 52.3  2.0   5370:41 xapi
> >  17  -3 22524  15m  1192 S  8.3  2.2 875:16.18 stunnel
> >  15  -5     0    0     0 S  0.7  0.0  54:28.78 netback
> >  10 -10  6384 1868   892 S  0.3  0.3  22:03.14 ovs-vswitchd
> >
> > dom0 on each of the non-master hosts is loaded at about 25-30%.
> >
> >
> >



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
