
RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance


  • To: <admin@xxxxxxxxxxx>, <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: Jeff Sturm <jeff.sturm@xxxxxxxxxx>
  • Date: Mon, 13 Sep 2010 13:21:05 -0400
  • Delivery-date: Mon, 13 Sep 2010 10:23:53 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Agreed with what's said below.  Traditional (Winchester) disk drives are incredibly slow devices relative to the rest of your computing environment (CPU, memory, network, etc.); a single random seek takes several milliseconds, while memory accesses take nanoseconds.  You can't do much with a pair of disks.  20-30 VMs won't work very well unless they each do very little I/O.

A cheap way to get more I/O throughput is to buy a big chassis and stuff lots of disks into it, as many as you can get.  The size of the disks isn't important; the quantity is.  15k drives often aren't cost-effective in such arrangements.  Most server chassis are optimized for PCI expansion and air flow, not storage, so an external chassis is often a necessity.  If cost is a factor, you can buy a chassis that can be shared across 2 or more dom0s.


In our environments, we tend to run anywhere from 4 up to about 8 domUs per dom0, and no more than 4 dom0s per disk array.  So one of our disk arrays (typically 14 disks in RAID10, plus a spare) may serve from 16 to 32 domUs.  Overall performance is good with our workload.
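
Back of the envelope, using the ~100 IOPS per 7,200 RPM spindle figure quoted below: 14 spindles gives roughly 14 x 100 = 1,400 random reads/sec, or about 700 writes/sec once RAID10 mirroring doubles each write, which works out to something like 20-40 IOPS per domU when shared across 16-32 domUs.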


It also helps to tune your Linux domUs to reduce I/O.  I've found a few simple tricks that help (example /etc/fstab and syslog.conf lines follow the list):


-      Mount ext3 partitions with “noatime”

-      Configure syslogd not to sync file writes

-      Get rid of disk-intensive packages like mlocate

-      Use tmpfs for small, volatile file storage (e.g. /tmp)
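
To make those concrete, here is roughly what the fstab and syslog changes look like.  This is only a sketch: the device name, tmpfs size and log selectors are illustrative, so adapt them to your own layout.

    # /etc/fstab -- noatime stops the inode write that every file
    # read would otherwise trigger; tmpfs keeps /tmp in RAM
    /dev/xvda1   /      ext3    defaults,noatime     1 1
    tmpfs        /tmp   tmpfs   size=64m,mode=1777   0 0

    # /etc/syslog.conf -- a leading "-" on the file name tells classic
    # sysklogd not to sync after every message
    *.info;mail.none;authpriv.none          -/var/log/messages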


Other tricks may be possible depending on the kinds of applications you run.


-Jeff


From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of admin@xxxxxxxxxxx
Sent: Monday, September 13, 2010 12:35 PM
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance


Each 7,200 RPM drive is good for about 100 IOPS.  Each 15k RPM SAS drive can usually handle 200 IOPS.  I would not personally try to run 20-30 VMs from two SATA drives, because it would almost surely lead to poor performance.  But I am basing that statement on the type of I/O I typically see in our environment.  Your VMs might generate totally different amounts of disk I/O than mine do, so you may or may not need to worry about it.  It really depends on the type of tasks each VM is doing.  One idea would be to measure the IOPS and graph them with MRTG.  Start with a few VMs and measure them for a few weeks to get an idea of how much total disk I/O is needed before moving all of the VMs into production.  Once you have actually measured the disk I/O for a while, you can make an informed decision.
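
If you want a quick spot check before MRTG is set up, you can sample the same counters MRTG would graph straight from /proc/diskstats.  A rough sketch ("xvda" is just a placeholder for whatever device your domU actually uses):

    #!/bin/sh
    # Average combined read+write IOPS on one device over 10 seconds.
    # In /proc/diskstats, field 3 is the device name and fields 4 and 8
    # are the completed read and write counts.
    DEV=xvda
    a=$(awk -v d="$DEV" '$3 == d { print $4 + $8 }' /proc/diskstats)
    sleep 10
    b=$(awk -v d="$DEV" '$3 == d { print $4 + $8 }' /proc/diskstats)
    echo "$DEV averaged $(( (b - a) / 10 )) IOPS"

(iostat -x from the sysstat package reports the same per-device rates, if you have it installed.)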


-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of kevin
Sent: Monday, September 13, 2010 10:45 AM
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] Hardware performance question : Disk RPM speed & Xen Performance


Hello,


I am a relatively new user of Xen virtualization, so you’ll have to forgive the simplistic nature of my question.


I have a Dell PowerEdge R410 server (dual quad-core CPUs + 32 GB RAM). I plan on using this server with Xen.


The ‘dilemma’ I am having is whether or not to replace the 2x 500 GB 7.2k RPM drives that came with the server with faster 300 GB 15k RPM drives. Obviously, drives that spin faster are in general a better thing. I am trying to avoid investing $1,000 more in these drives unless it is absolutely necessary.


From the Xen documentation, I couldn't get a good enough idea of how disk writes and disk speed might become a bottleneck once 20-30 VMs are ultimately running on the box.


Does anyone have any experience or advice to share? Ultimately I don't mind spending the extra money to replace the drives, but I would love to hear your thoughts on what kind of actual performance increase I might expect.


Thanks!


Kevin


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


