
[Xen-users] Very high load and 100% I/O wait in DomU after file activity


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Vidar Salberg Normann" <vidarno@xxxxxxxxx>
  • Date: Mon, 19 Jan 2009 13:46:02 +0100
  • Delivery-date: Mon, 19 Jan 2009 04:46:46 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi,

I'm running Xen 3.3 on a CentOS Dom0 and have two DomUs, running
CentOS 4.7 and 5.2 respectively. Both DomUs consistently suffer from
high load when transferring a lot of data, for example via rsync, FTP
or wget. Here is the output of "top" on the CentOS 5.2 DomU while I am
downloading a Linux ISO image with wget:

top - 12:58:22 up 4 days,  2:07,  3 users,  load average: 4.73, 3.15, 1.37
Tasks:  64 total,   2 running,  62 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,  0.0%id,100.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2097152k total,  1309028k used,   788124k free,   125984k buffers
Swap:  3148700k total,        0k used,  3148700k free,   975024k cached


I have seen the load reach over 30, even after killing the process
(wget, ftp or rsync) that caused it to go through the roof. The issue
seems somehow related to
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1371, except
that I get no visible errors, only the high load and the DomU becoming
unresponsive. I've seen an earlier discussion of this topic in the
archives of this list, under the subject "Performance Issues: I/O Wait"
by Nick Couchman, but no solution. I have already turned off TX
checksum offloading on the NICs with "ethtool -K [nicname] tx off" as
suggested, but no luck.
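
For reference, this is roughly what I ran in the DomU (eth0 is just an
example; substitute the actual interface name):

  # disable TX checksum offloading on the guest NIC (eth0 is an example)
  ethtool -K eth0 tx off
  # verify that the setting took effect
  ethtool -k eth0 | grep -i checksum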

Using iostat, here is the output from immediately before and
immediately after the load spike:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.38    0.00   30.02    8.15    0.99   57.46

Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
xvda              0.00   187.28  0.00  0.20     0.00     8.75    88.00    12.77  124.00 404.00   8.03

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00    0.00  100.00    0.00    0.00

Device:         rrqm/s   wrqm/s   r/s   w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
xvda              0.00   170.54  0.00 34.67     0.00   861.72    49.71   148.02 3641.11  28.90 100.20
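
The stats above are extended per-device output in kB, collected with
something like this (the 5-second interval is just an example):

  iostat -x -k 5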

Any suggestions?

Regards,
Vidar

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
