
RE: [Xen-users] My future plan


  • To: "Jonathan Tripathy" <jonnyt@xxxxxxxxxxx>,<Xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Robert Dunkley" <Robert@xxxxxxxxx>
  • Date: Tue, 8 Jun 2010 15:56:00 +0100
  • Cc:
  • Delivery-date: Tue, 08 Jun 2010 07:57:59 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AcsHCes2uM5QaNuCTjC2th1DBTX8rwADQycQAABS0VkAADDuUA==
  • Thread-topic: [Xen-users] My future plan

Hi Jonathan,

 

The NAS uses good components; make sure you get the IPMI option if this is going in a rack more than 5 minutes from where you work. Ask Broadberry if they can supply the newer SAS 6G expander version of that chassis and the newer 9260-4i 6G RAID card (I'm pretty sure it's a Supermicro-approved card for that chassis); with 16 drives, 6G SAS may remove a potential bottleneck at the expander. Also, consider 15K SAS for your high-IO database and mail servers; a mix of 15K SAS and 7K SATA arrays might be appropriate.

 

Cards other than LSI often have issues with the LSI-based expanders in those Supermicro chassis. Areca cards do work with the SAS1 expander as long as SAF-TE is disabled, but given the expander I think LSI is the only advisable card brand.

 

Any reason you aren't considering 1U servers with integrated Intel NICs for the nodes? The best bang per buck for nodes is often a 1U dual Xeon E55xx quad-core or one of the new 8/12-core Opteron systems.

 

Rob

 

 

 

From: Jonathan Tripathy [mailto:jonnyt@xxxxxxxxxxx]
Sent: 08 June 2010 15:38
To: Robert Dunkley; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

 

Hi Rob,

 

Do you have any links or anything for the cards that you suggest? I'm just a start-up, so low cost is very much a good thing here :) But then again, so is having my cake and eating it as well!!

The RAID card is what came standard with the server I was looking at: http://www.broadberry.co.uk/iscsi-san-nas-storage-servers/cyberstore-316s-wss

 

That's a fantastic idea about the PXE booting! The only thing, though, is that Dell supply their servers with a minimum of a single HDD as standard, so there would be no cost saving there. Also, all the servers would have to be the same.

 

My idea is that if this were to work out properly, I would get servers better than the R210, as these are limited to a maximum of 16GB of RAM.

 

Thanks

 

Jonathan

 


From: Robert Dunkley [mailto:Robert@xxxxxxxxx]
Sent: Tue 08/06/2010 15:36
To: Jonathan Tripathy; Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-users] My future plan

 

Hi Jonathan,

 

 

Might be worth considering a different RAID card. Even with simple RAID 1, I did not get proper RAID 1 random-read interleaving performance from an LSI 1068-based controller (assuming the 1078 is very similar), whereas an IOP-based Areca card behaved properly (only a 30% improvement over a single drive with the LSI, but 80% better with the Areca, in simple Bonnie testing). I was using CentOS 5.2 at the time (integrated drivers).

 

If you are feeling brave, PXE booting could work to remove the need for any system drives on the nodes.
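As a rough illustration of the idea, a diskless node needs a DHCP/TFTP service pointing it at a PXE bootloader, plus a network root filesystem. This is a minimal sketch assuming dnsmasq and PXELINUX with an NFS root; all addresses, paths, and the NFS server (192.168.1.5) are placeholders, not anything from the thread:

```
# /etc/dnsmasq.conf -- sketch: DHCP plus TFTP for PXE clients
dhcp-range=192.168.1.100,192.168.1.150,12h
dhcp-boot=pxelinux.0
enable-tftp
tftp-root=/var/lib/tftpboot

# /var/lib/tftpboot/pxelinux.cfg/default -- boot menu the nodes fetch
DEFAULT xen-node
LABEL xen-node
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=/dev/nfs nfsroot=192.168.1.5:/nodes/root ro
```

The nodes would then boot entirely from the network, with their only local storage being the iSCSI LUNs for guests.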

 

 

Rob

 

From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Jonathan Tripathy
Sent: 08 June 2010 13:56
To: Xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] My future plan

 

My future plan for my VPS hosting solution currently looks like this, so any feedback would be appreciated:

 

Each Node:

Dell R210, Intel X3430 quad-core, 8GB RAM

Intel PT 1Gbps dual-port server NIC using Linux "bonding"

Small pair of HDDs for OS (Probably in RAID1)

Each node will run about 10 - 15 customer guests
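For the bonding mentioned above, a minimal CentOS-style sketch might look like the following. Mode 4 (802.3ad/LACP) is an assumption to match the switch trunks; interface names, the address, and file paths are placeholders:

```
# /etc/modprobe.conf -- load the bonding driver for bond0
# mode=4 is 802.3ad (LACP); miimon=100 polls link state every 100ms
alias bond0 bonding
options bond0 mode=4 miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10      # example address on the storage network
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Both physical ports enslave to bond0, which carries the node's iSCSI traffic over the 2-port trunk on the switch.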

 

 

Storage Server:

Some Intel Quad Core Chip

2GB RAM (Maybe more?)

LSI 8704EM2 RAID controller (I think this controller does 3Gbps)

Battery backup for the above RAID controller

4 X RAID10 Arrays (4 X 1.5TB disks per array, 16 disks in total)

Each RAID10 array will connect to 2 nodes (8 nodes per storage server)

Intel PT 1Gbps quad-port NIC using Linux bonding

Exposes 8 X 1.5TB iSCSI targets (each node will use one of these)
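To sketch how the storage server might expose those targets, here is a hypothetical fragment for the iSCSI Enterprise Target (common on CentOS in this era); the IQN, device path, and credentials are all made-up placeholders, and you would repeat one Target stanza per node:

```
# /etc/ietd.conf -- sketch: one iSCSI target per node
# /dev/vg0/node1 is a placeholder block device carved from
# one of the RAID10 arrays (half of a 3TB array = 1.5TB).
Target iqn.2010-06.uk.example:storage.node1
        Lun 0 Path=/dev/vg0/node1,Type=blockio
        # CHAP credentials so only the matching node can log in
        IncomingUser node1user secretpassword
```

Each node's initiator then logs into its own target over the bonded links, keeping guests' disks off the nodes entirely.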

 

HP ProCurve 1800-24G switch to create 1 X 4-port trunk (for the storage server) and 8 X 2-port trunks (for the nodes)

 

What you think? Any tips?

 

Thanks

 

The SAQ Group

Registered Office: 18 Chapel Street, Petersfield, Hampshire GU32 3DZ
SAQ is the trading name of SEMTEC Limited. Registered in England & Wales
Company Number: 06481952

 

http://www.saqnet.co.uk AS29219

SAQ Group Delivers high quality, honestly priced communication and I.T. services to UK Business.

Broadband : Domains : Email : Hosting : CoLo : Servers : Racks : Transit : Backups : Managed Networks : Remote Support.

 

 SAQ Group

 

ISPA Member


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

