
Re: [Xen-users] XEN - networking and performance

On Fri, Oct 7, 2011 at 11:12 AM, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Simon Hobson
> Sent: Thursday, October 06, 2011 4:51 PM
> Jeff Sturm wrote:
> >One of the traps we've run into when virtualizing moderately I/O-heavy
> >hosts is not sizing our disk arrays right. Not in terms of capacity
> >(terabytes) but in spindles. If each physical host normally has 4
> >dedicated disks, for example, virtualizing 8 of these as domUs
> >attached to a disk array with 16 drives effectively cuts that ratio
> >from 4:1 down to 2:1. Latency goes up, throughput goes down.
> Not only that, but you also spread the I/O across different areas of the disk
> (different partitions/logical volumes), so you virtually guarantee a lot more
> seek activity.

Very true, yes. In such an environment, sequential disk performance means very little. You need good random I/O throughput, and that's hard to get from mechanical disks beyond a few thousand IOPS. 15k RPM disks help, a larger chassis with more disks helps, but that's just throwing $$$ at the problem and doesn't really break through the IOPS barrier.
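
To put rough numbers on that, here's a back-of-the-envelope sketch (Python; the seek and latency figures are assumed ballpark values for a 15k RPM drive, not measurements from this thread):

# Rough random-IOPS ceiling for a shared disk array.
# Assumed ballpark figures for a 15k RPM drive (not measured here):
avg_seek_ms = 3.5                              # typical average seek
rotational_latency_ms = 60_000 / 15_000 / 2    # half a revolution = 2.0 ms

# One random I/O costs roughly one seek plus half a revolution.
iops_per_disk = 1_000 / (avg_seek_ms + rotational_latency_ms)  # ~182

hosts = 8           # physical hosts being consolidated (example above)
disks_per_host = 4  # dedicated spindles each host used to have
array_disks = 16    # spindles in the shared array

print(f"per disk:  ~{iops_per_disk:.0f} IOPS")
print(f"array:     ~{iops_per_disk * array_disks:.0f} IOPS total")
print(f"spindles:  {disks_per_host}:1 before, {array_disks // hosts}:1 after")

Sixteen 15k spindles top out somewhere around 3,000 random IOPS for the whole array, shared by all eight guests. Adding disks scales that total linearly, which is why you can spend a lot of money and still only inch past a few thousand IOPS.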

Anyone tried SSDs with good results? I'm sure capacity requirements can make them cost-prohibitive for many.


I'm running 3/4 TB of SSDs for my additional disks in my XCP cloud, shared out as an iSCSI SR. I tried SSDs as storage for disk images under Xen and hit some strange issues, so I'm not quite ready to put the VMs' OS images on them. I'll report back when I have more info.
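
For anyone wanting to reproduce that layout, this is roughly how the SR gets created (a sketch driving the stock xe CLI from Python; every device-config value below is a hypothetical placeholder -- substitute your own, and xe sr-probe can help you discover the SCSIid):

import subprocess

def create_ssd_iscsi_sr(host_uuid, target_ip, target_iqn, scsi_id):
    """Attach an SSD-backed iSCSI target to an XCP pool as a shared SR."""
    cmd = [
        "xe", "sr-create",
        f"host-uuid={host_uuid}",
        "type=lvmoiscsi",
        "content-type=user",
        "shared=true",
        "name-label=SSD iSCSI SR",
        f"device-config:target={target_ip}",
        f"device-config:targetIQN={target_iqn}",
        f"device-config:SCSIid={scsi_id}",
    ]
    # xe prints the new SR's uuid on success.
    return subprocess.check_output(cmd, text=True).strip()

sr_uuid = create_ssd_iscsi_sr(
    host_uuid="<pool-master-uuid>",               # placeholder
    target_ip="192.0.2.10",                       # placeholder (TEST-NET)
    target_iqn="iqn.2011-10.example:ssd-array",   # placeholder
    scsi_id="<scsi-id-from-sr-probe>",            # placeholder
)
print("created SR", sr_uuid)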

Grant McWilliams

Some people, when confronted with a problem, think "I know, I'll use Windows."
Now they have two problems.
