
Re: [Xen-users] DomU IO issue


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: SZABO Zsolt <szazs@xxxxxxxxx>
  • Date: Wed, 17 Jun 2009 00:25:37 +0200 (CEST)
  • Delivery-date: Tue, 16 Jun 2009 15:27:46 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

I also have a strange IO problem, though I am not sure whether it is caused by the driver of the adapter or by something else...

I would use md RAID1 devices with LVM on top of them... and now the question is the filesystem: ext3, reiserfs, xfs, or something else?
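
Just to illustrate the layout I mean (the device and volume names below are only placeholders, not my actual setup):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  pvcreate /dev/md0
  vgcreate vg_xen /dev/md0
  lvcreate -L 10G -n domu-disk vg_xen
  mkfs.ext3 /dev/vg_xen/domu-disk    # or mkfs.xfs / mkfs.reiserfs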

I have an inherited system with reiserfs on the LVM partitions above the md devices, both in dom0 and in the domUs. I also have a separate xensave LVM partition, but with xfs... The xm save command for a domU with 4-6 GB of memory sometimes takes 8-10 minutes! I do not use it frequently, though, so I have not investigated it much...
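
A rough back-of-the-envelope calculation (assuming the save image is about the size of the domU memory):

  6144 MB / 600 s ≈ 10 MB/s

which is well below what the disks should manage sequentially, so raw disk bandwidth is probably not the bottleneck.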

IO performance is probably also influenced by whether dom0 has a dedicated core or not...

Are there docs, howtos, or benchmarks about these topics (optimal CPU setup, partitioning, and filesystem choice)? I am not an expert and I am not even sure what an appropriate test procedure would be...

Currently I use Debian etch and Xen 3.2.1 with a 2.6.18 kernel.

--
Zsolt
As Fajar has probably already suggested, I have:

(XEN) Command line: console=vga vga=gfx-1024x768x8 dom0_mem=512M dom0_vcpus_pin apic_verbosity=debug cpufreq=dom0-kernel acpi=on numa=on

in xend-config.sxp:
(dom0-min-mem 196)
(dom0-cpus 1)
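
To keep the domUs off dom0's core, I think something like this would also be needed in each domU config file (the exact cpu list is only an example):

  vcpus = 4
  cpus = "1-7"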

xm list:
Name                                   ID   Mem VCPUs      State   Time(s)
Domain-0                                0   512     1     r-----  14695.7
cosmos                                  1   512     1     -b----   4275.0
xp_hvm                                  2   512     1     r----- 287362.6
linserver                               3  6144     4     -b----   3366.1
w2003_hvm                               6  6144     4     ------   4805.5

xm vcpu-list:
Name                                ID  VCPU   CPU State   Time(s) CPU Aff.
Domain-0                             0     0     0   r--   14684.9 0
Domain-0                             0     1     -   --p       1.8 1
Domain-0                             0     2     -   --p       1.8 2
Domain-0                             0     3     -   --p       1.3 3
Domain-0                             0     4     -   --p       1.1 4
Domain-0                             0     5     -   --p       1.5 5
Domain-0                             0     6     -   --p       1.9 6
Domain-0                             0     7     -   --p       2.9 7
cosmos                               1     0     1   -b-    4276.2 any cpu
xp_hvm                               2     0     3   r--  287397.8 3-4
linserver                            3     0     5   -b-    2482.6 any cpu
linserver                            3     1     5   -b-     182.0 any cpu
linserver                            3     2     4   -b-     423.7 any cpu
linserver                            3     3     2   -b-     277.8 any cpu
w2003_hvm                            6     0     6   -b-    1120.6 4-7
w2003_hvm                            6     1     7   -b-     507.5 4-7
w2003_hvm                            6     2     4   -b-    2617.5 4-7
w2003_hvm                            6     3     5   -b-     560.8 4-7

(cosmos and linserver got "any cpu" after xm restore... before xm save they had 3-4 and 1-3,5... but I am not really sure what the optimal config would be: cosmos and xp_hvm have little load, while linserver and w2003_hvm serve a classroom of 15-20 clients, though the terminals usually run either Linux-only or Windows rdesktop-only sessions...)
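
(If I wanted the old affinity back after a restore, I suppose I could re-pin by hand with something like the following -- the cpu lists are just the values from before the save:

  xm vcpu-pin cosmos 0 3-4
  xm vcpu-pin linserver 0 1-3,5
  xm vcpu-pin linserver 1 1-3,5
  xm vcpu-pin linserver 2 1-3,5
  xm vcpu-pin linserver 3 1-3,5
)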

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

