
[Xen-users] IO intensive guests - how to design for best performance

  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Kevin Maguire <k.c.f.maguire@xxxxxxxxx>
  • Date: Thu, 24 Jun 2010 14:03:06 +0200
  • Delivery-date: Thu, 24 Jun 2010 05:04:28 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>


I am trying to engineer a HA xen solution for a specific application workload.

I will use:

*) 2 multicore systems (maybe 32 or 48 cores) with lots of RAM (256 GB)
*) dom0 OS will be RHEL 5.5
*) I would prefer to use Xen as bundled by the distribution, but if
required features are only found in later releases then this can be
reconsidered
*) the servers are connected to the SAN
*) I have about 10 TB of shared storage, and will use around 20-25
RHEL paravirt guests
*) The HA I will manage with heartbeat, and I will probably use clvmd
for the shared storage
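
For reference, the clustered-LVM piece I have in mind would look
roughly like this (the multipath device and volume names are just
placeholders, and this assumes both dom0s see the same SAN LUN):

```
# Sketch only -- /dev/mapper/mpath0 and the VG/LV names are assumptions.
# Mark the volume group clustered (-cy) so clvmd coordinates LVM
# metadata changes between the two dom0s:
vgcreate -cy vg_san /dev/mapper/mpath0

# Carve one logical volume per guest out of the shared VG:
lvcreate -L 400G -n guest01_root vg_san
```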

My concern is to get the most out of the system in terms of I/O.  The
guests will have a range of vCPUs assigned, say from 1 to 8, and their
workload varies over time. When they are doing work it is both
I/O- and CPU-intensive. It is only in unlikely use cases that all or
most guests are very busy at the same time.

The current solution to this workload is a cluster of nodes with
either GFS (using shared SAN storage) or local disks; both
approaches have some merits.  However, I am not tied to that
architecture at all.

There seem to be a lot of options here (too many!):

*) create a large LUN / LVM volume on my SAN, pass it to all the
guests, and use GFS/GFS2
*) same thing, except use OCFS2
*) split my SAN storage into many LUNs / LVM volumes, and export one
chunk per VM via phys: or tap:... interfaces
*) more complex PCI passthrough configurations giving guests direct (?)
access to storage
*) create a big ext3/xfs/... file system on dom0 and export it to the
guests over NFS (a kind of loopback?)
*) others ...
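
For the one-LUN-per-guest option, the guest config I have in mind
would be something like the following (guest name, LV path, and sizes
are made up for illustration):

```
# /etc/xen/guest01 -- illustrative only; names and sizes are assumptions
name    = "guest01"
memory  = 8192
vcpus   = 4
# hand the guest a dedicated logical volume as its root block device:
disk    = [ "phys:/dev/vg_san/guest01_root,xvda,w" ]
```

(With RHEL 5 paravirt guests, xvda is, as far as I know, the usual
in-guest device name for a phys:-exported volume.)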

I am really asking for any advice and experiences from list members
who have faced similar problems, and what they found works best.


Xen-users mailing list


