
Re: [Xen-users] Xen and I/O Intensive Loads



> I'm not really sure that bandwidth is the issue - latency, perhaps,
> more than that.  I don't think the amount of data is what's causing
> the problem; rather, it's the number of transactions the e-mail
> system is trying to do on the volume.  The files are actually pretty
> small - 1 to 4 KB on average - so I think it's the large number of
> files it has to read, rather than streaming a large amount of data.
> Both the SAN and the iostat output on dom0 and domU indicate read
> rates somewhere between 5000 and 20000 kB/s - that's roughly
> 40 Mb/s to 160 Mb/s, well within the capability of the FC
> connection.  The SAN is reporting between 500 and 1500 I/O requests
> per second, which I assume is what's causing the problem.

What does the backend inside the SAN look like?  Check the amount of
cache, the number of spindles, the RAID level, what else is using those
spindles, etc.

500-1500 IOPS isn't a lot for a "SAN" in general, but given that your
FC disks will deliver around 200 IOPS each in the worst case, you'd
still need quite a few of them to sustain 1500 continuously (with the
cache absorbing some of the spikes).  How many depends on the workload
(read/write mix, random or sequential, block size) and the RAID type.
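
To put rough numbers on that - treating these as illustrative
assumptions (a 70/30 read/write mix, ~200 IOPS per disk, textbook RAID
write penalties), not measurements from your array - here's the
back-of-envelope math in Python:

import math

def backend_iops(frontend_iops, read_fraction, write_penalty):
    # Each host write costs write_penalty disk I/Os (e.g. 2 for
    # RAID 10, 4 for RAID 5); reads cost one disk I/O apiece.
    reads = frontend_iops * read_fraction
    writes = frontend_iops * (1 - read_fraction)
    return reads + writes * write_penalty

def spindles_needed(frontend_iops, read_fraction, write_penalty,
                    iops_per_disk=200):
    return math.ceil(backend_iops(frontend_iops, read_fraction,
                                  write_penalty) / iops_per_disk)

# Assuming 70% reads at the 1500 IOPS peak:
for raid, penalty in (("RAID 10", 2), ("RAID 5", 4)):
    print(raid, spindles_needed(1500, 0.7, penalty))
# -> about 10 spindles for RAID 10, 15 for RAID 5, before any
#    help from the cache.

If the real mix is more write-heavy than that, the spindle count
climbs quickly, which is why the RAID type matters so much here.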

If you haven't already, I'd work through the usual filesystem
performance tuning guides and do things like turning off atime
updates; see the example below.  My feeling is that you're going to
need to drive those IOPS numbers down.
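
For instance, on Linux you can drop the atime traffic without a
reboot (the device and mount point below are placeholders - substitute
whatever your mail volume actually is):

    mount -o remount,noatime,nodiratime /var/spool/mail

and make it persistent with a matching /etc/fstab entry:

    /dev/xvdb1  /var/spool/mail  xfs  defaults,noatime,nodiratime  0 0

Every atime update is an extra write for each file read, and on a mail
spool full of small files that adds up to exactly the kind of IOPS
you're trying to shed.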

What were your results when you tried something other than XFS?

John


-- 
John Madden
Sr UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@xxxxxxxxxxx




 

