[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-users] Recommendations setting up shared storage for a build-farm for our software

  • To: "Ron Arts" <ron.arts@xxxxxxxx>, <xen-users@xxxxxxxxxxxxx>
  • From: "Todd H. Foster" <toddf@xxxxxxxxxxxxxxxxx>
  • Date: Mon, 30 Apr 2012 12:40:04 -0700
  • Cc: kwalter@xxxxxxxxxx
  • Delivery-date: Mon, 30 Apr 2012 19:41:24 +0000
  • List-id: Xen user discussion <xen-users.lists.xen.org>
  • Thread-index: Ac0mxbE3lpf5HwimTUWpSbzOvAf6ZQAQCWhg
  • Thread-topic: [Xen-users] Recommendations setting up shared storage for a build-farm for our software

What you are going to find is that the bottleneck is going to be the 1
Gbit network.

I have a very similar setup: my SAN is running Nexenta (ZFS beats
hardware RAID cards, in my experience) with 6 SATA disks in a ZFS pool,
32 GB of RAM, and a quad-port gigabit NIC, bonded.  For Dom0s I'm
running IBM blades, each through a single 1 Gbit NIC.

As far as latency goes, the thing is stellar, but I am limited to 1 Gbit
of throughput per Dom0.  What this means is that I have 4 or 5 VMs
sharing roughly 100 MB/s of disk throughput (the SATA interface alone is
150 to 600 MB/s).  For small reads and writes it works well, but it is a
bit slow streaming large files.  Don't get me wrong, the performance is
better than adequate, but the real bottleneck in my situation is the
network.  Locally on the SAN, throughput straight off the disks would
soak two 1 Gbit interfaces, and once you throw in the ARC cache the
thing would soak a 10 Gbit network.
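The arithmetic behind that bottleneck is worth spelling out.  A rough
sketch (the ~90% usable-payload figure is an assumption for TCP/iSCSI
overhead, not a measured number):

```shell
# Rough per-VM bandwidth on a shared 1 Gbit link, in integer MB/s
link=$(( 1000 / 8 ))            # 1 Gbit/s ~= 125 MB/s theoretical
usable=$(( link * 90 / 100 ))   # assume ~90% left after TCP/iSCSI overhead
per_vm=$(( usable / 5 ))        # split across 5 busy VMs
echo "${per_vm} MB/s per VM"    # prints "22 MB/s per VM"
```

So each VM sees less bandwidth than a single laptop disk, which is why
small random I/O feels fine but large streaming transfers drag.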

So if I were you, I would get some bonded interfaces on your SAN.  One
NIC is not going to cut it!
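On a Nexenta/Solaris-derived SAN, link aggregation is normally done with
dladm.  A sketch, assuming the four ports show up as e1000g0 through
e1000g3 (interface names and the IP address are illustrative, and the
switch ports must be configured for LACP too):

```shell
# Combine four gigabit ports into one LACP aggregate (run as root)
dladm create-aggr -l e1000g0 -l e1000g1 \
    -l e1000g2 -l e1000g3 -L active aggr1
# Plumb and address the new aggregate interface
ifconfig aggr1 plumb 192.168.1.10/24 up
```

Depending on the Nexenta release you may have the newer ipadm tooling
instead of ifconfig, and older builds use the key-number form of
create-aggr; check the dladm man page on your box.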

If you are going to use iSCSI, be aware that you cannot share a single
iSCSI target between pools. You must have a separate target for each
pool, so slice your disks appropriately.
In my situation, what I did was split the disk space in half: one iSCSI
target for production, and the rest shared via NFS (which can easily be
sliced for multiple pools).
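For reference, carving out a dedicated zvol per iSCSI target on a ZFS
box looks roughly like this with COMSTAR (pool and dataset names are
made up, sizes are illustrative, and the LU name placeholder must be
filled in from the create-lu output):

```shell
# Create a 1.5 TB block volume in the pool for the production target
zfs create -V 1500G tank/xcp-prod
# Register it as a SCSI logical unit and expose it to initiators
stmfadm create-lu /dev/zvol/rdsk/tank/xcp-prod
stmfadm add-view <LU-name-printed-by-create-lu>
# Enable the iSCSI target service and create the target
svcadm enable -r svc:/network/iscsi/target:default
itadm create-target
# Everything else stays a plain filesystem, shared over NFS
zfs create tank/builds
zfs set sharenfs=on tank/builds
```

The NFS side is just a dataset property, which is why it is so much
easier to slice up between pools than iSCSI LUNs.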

I would recommend that you do some reading on ZFS, Nexenta, and Solaris
derivatives, and make a decision on what your storage is going to look
like after some due diligence.

Just my 2 pence...

-----Original Message-----
From: xen-users-bounces@xxxxxxxxxxxxx
[mailto:xen-users-bounces@xxxxxxxxxxxxx] On Behalf Of Ron Arts
Sent: Monday, April 30, 2012 4:37 AM
To: xen-users@xxxxxxxxxxxxx
Subject: [Xen-users] Recommendations setting up shared storage for a
build-farm for our software


I am trying to set up an XCP-based build farm for our software.
It should build packages for various environments, both 32- and 64-bit.
For now we need 27 VMs of about 10 GB disk size each; they need 256 MB
RAM for the 32-bit VMs and 512 MB for the 64-bit ones.
For the XCP setup itself I have 6 quad-core 8 GB servers available.
All network adapters are 1 Gbit.  And we are a CentOS shop.

My question is about the shared storage.  I have at my disposal one
quad-core server with 4 GB RAM and 12 x 1 TB SATA disks on a 3ware
9500S 12-port controller.

We will need 3 TB in total for the VMs and the shared storage combined.

How do I make the most efficient use of this storage? I am contemplating
NFS and iSCSI (and leaning towards the latter, because I did some tests,
and iSCSI was faster).
Because of the continuous compilation we need short seek times, so I
think striping is important.

So do I create one big striped RAID-10 array of, say, 3 TB on the 3ware
and hand that to XCP as a single iSCSI LUN, or do I create 6 separate
RAID-1 arrays, give those to XCP, and let it build an LVM volume group
out of them?  Or am I better off separating the shared storage for the
VMs and the compilation drives onto different halves of the array?
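For the second option, the LVM side is straightforward.  A sketch,
assuming the six hardware RAID-1 units appear in Dom0 as /dev/sdb
through /dev/sdg (device and volume group names are made up; when XCP
manages the SR it runs the pv/vg setup itself via xe sr-create, so this
only illustrates the striping idea):

```shell
# Turn six hardware RAID-1 units into one striped LVM pool
pvcreate /dev/sd{b,c,d,e,f,g}
vgcreate vg_builds /dev/sd{b,c,d,e,f,g}
# Stripe the logical volume across all six mirrors (-i 6),
# 64 KB stripe size, to spread seeks over every spindle pair
lvcreate -n vm_store -L 1.5T -i 6 -I 64 vg_builds
```

Striping across the mirrors gets you the RAID-10-like seek behaviour
while still letting you resize per-purpose volumes later.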

I am aware this is not an easy question, but I'd appreciate any pointers
and insights.

Ron Arts

One IP <http://www.oneip.nl>

www.oneip.nl <http://www.oneip.nl>

Wattstraat 34
2171 TR Sassenheim
The Netherlands
Tel: +31(0) 85 1119126
Fax: +31(0) 85 1119199


Xen-users mailing list
