
Re: [Xen-users] Xen dom0 load affecting domUs



On 12/04/2012 08:42 PM, Riyan S wrote:

The domUs are stored on a HW RAID device, a 3ware 9750-4i (RAID 10). Ideally, what should I set the domU scheduler attributes to? Before I put the VM images on h/w RAID, I did try putting them on NFS storage and the performance on the VMs was absolutely terrible; moreover I observed a lot of issues like the CPU hitting 100% and, at times, NFS locking issues, etc.

Also, how would a disk-intensive VM affect the performance of the Dom0? I know that using file:/ uses a loopback device in Dom0, and that it uses the local dom0 cache; does that induce any sort of Dom0 overhead?

Date: Tue, 4 Dec 2012 20:06:12 +0100
From: skupko.sk@xxxxxxxxx
To: tesla.coil@xxxxxxxx
CC: xen-users@xxxxxxxxxxxxx
Subject: Re: [Xen-users] Xen dom0 load affecting domUs

On 12/04/2012 07:24 PM, Riyan S wrote:
Hello folks,

I have a Xen server running on CentOS 6.2 with 3.6.6-1.el6xen.x86_64 as the Dom0 kernel, and I have dedicated 2 CPUs and 4G of RAM to the Dom0. I have close to 4-5 virtual machines running on the Xen server.

The problem is that, for some reason, carrying out any CPU/disk-intensive task on the Dom0 seems to be affecting the DomUs adversely. For example, I noticed that my DomUs become extremely sluggish if I use a 'dd' command to create an 80G file on the Dom0. Is this behaviour normal? I guess maybe I should use a sparse file instead?

So if I have more VMs, do I need to allocate more resources to the Dom0 in terms of memory and CPU? As of now all the VMs use a loopback device in the Dom0.

The dom0 itself does not consume much memory or CPU, so 512 or 768MB should be enough for dom0. The most important part of successfully running more domUs on your server is to have a good storage design and a fast IO subsystem. Some (real) HW RAID controller with cache, or FibreChannel, or iSCSI over fast Ethernet is something you should think about.
Anyway, the behavior you are experiencing is expected and normal. This is usually something VPS users do not think about - enough memory and CPU doesn't mean your virtual server will run 'fast'. ;-)
I use 'ionice -c 3' when some intensive IO load needs to be performed on dom0. You can also use the 'nice' command if you wish. (Test it by running under '/usr/bin/time -v'. ;-) )
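For example, something like this - the file path and size here are just placeholders:

    # run the heavy write at idle IO priority (and low CPU priority),
    # so that domU IO requests win any contention in dom0
    ionice -c 3 nice -n 19 dd if=/dev/zero of=/var/lib/xen/images/big.img bs=1M count=81920

    # compare the resource usage with and without the wrappers
    /usr/bin/time -v dd if=/dev/zero of=/tmp/test.img bs=1M count=1024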
The other thing you can take care of is to set the scheduler attributes of every domU (via 'xm sched-credit', for example).
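For example (the domain name and values are only illustrative):

    # show the current credit-scheduler parameters for all domains
    xm sched-credit

    # give domU 'web01' twice the default weight (256) and no cap
    xm sched-credit -d web01 -w 512 -c 0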

--
Peter Viskup


I do not have experience with NFS or file-based disks, but there are many discussions on the net on this topic - also on this xen-users mailing list; try searching for lvm vs. file storage and you will see.
For the scheduler, read the 'xm' manual or look at the reference on the net and the Xen wiki - http://wiki.xen.org/wiki/Credit_Scheduler.
Every IO also consumes CPU, so if you set the credit scheduler attributes properly you will have a fair setup across all domUs. A good rule of thumb is to set the 'weight' to the amount of memory (in MB) you assign to the domU, though your requirements may vary.
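With that rule of thumb it could look like this (domain names and sizes are just examples):

    # weight proportional to memory: the 2048MB domU and the 1024MB domU
    # then share the CPU in a 2:1 ratio under contention
    xm sched-credit -d db01 -w 2048
    xm sched-credit -d web01 -w 1024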
Your RAID controller is similar to my HP SA P410i - it should give you enough performance.
File-based disks use the FS cache in dom0, but since you have such a good storage controller installed, it's better to set up LVM-based disks for the domUs. With such a setup all the IO requests of a domU are sent directly to the storage controller, so you use only the controller's cache and no additional dom0 memory. Look on the internet for other arguments.
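In the domU config that is just a 'phy:' disk line pointing at the LV (the VG/LV names are only examples):

    # the LV is exported to the guest as a whole disk;
    # IO bypasses the dom0 FS cache entirely
    disk = [ 'phy:/dev/datavg/web01_disk,xvda,w' ]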
In case you want to play with your file-based disks, you can test both the 'file:' and 'tap:aio:' drivers and compare. The second one does not use the FS cache in dom0.
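That is, the same image referenced in both ways (the path is a placeholder):

    # loopback-backed disk - goes through the dom0 page cache
    disk = [ 'file:/var/lib/xen/images/domu1.img,xvda,w' ]

    # blktap AIO driver - same image, without the dom0 FS cache
    disk = [ 'tap:aio:/var/lib/xen/images/domu1.img,xvda,w' ]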

My setup is like this (working like a charm); a sketch of the commands follows the list:
 - in dom0, a VG 'datavg' holds the domU disks
 - every LV in datavg is a PV for one domU
 - in the domU, the xvd disk is used as an LVM 'drive'
 - the VG in the domU is named after the domU
 - in the domU VG there are several LVs with separate /:/usr:/var:/var/log:/tmp (and some other application FS) for system/application security and manageability reasons
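A sketch of how such a layout could be created - all names and sizes are only examples:

    # dom0: carve one LV per domU out of the data VG
    lvcreate -L 20G -n web01_disk datavg
    # the domU config then uses: disk = [ 'phy:/dev/datavg/web01_disk,xvda,w' ]

    # inside the domU: the whole xvd disk becomes a PV,
    # with the VG named after the guest
    pvcreate /dev/xvda
    vgcreate web01 /dev/xvda
    lvcreate -L 2G -n root web01
    lvcreate -L 2G -n usr web01
    lvcreate -L 2G -n var web01
    lvcreate -L 1G -n varlog web01
    lvcreate -L 1G -n tmp web01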

Extending a disk for a domU is then just an lvextend of the particular LV in datavg on dom0, followed by a pvresize in the domU - and that's it!
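For example (names and sizes are placeholders again; note that to actually use the new space you still grow an LV and its filesystem inside the guest):

    # dom0: grow the backing LV
    lvextend -L +10G /dev/datavg/web01_disk

    # domU: let LVM see the bigger PV, then grow whatever needs the space
    pvresize /dev/xvda
    lvextend -L +5G /dev/web01/var
    resize2fs /dev/web01/var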

--
Peter
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

