
Re: [Xen-users] Optimizing I/O


  • To: Robert Dunkley <Robert@xxxxxxxxx>
  • From: Heiko <rupertt@xxxxxxxxx>
  • Date: Fri, 23 Jan 2009 15:24:42 +0100
  • Cc: Craig Herring <craigeherring@xxxxxxxxx>, Rudi Ahlers <rudiahlers@xxxxxxxxx>, xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 23 Jan 2009 06:25:28 -0800
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Fri, Jan 23, 2009 at 2:13 PM, Robert Dunkley <Robert@xxxxxxxxx> wrote:
> Some very good advice below. If you have the budget for a decent
> SAN-type box for storage, then InfiniBand + RDMA + iSCSI + DRBD on two
> mirrored boxes should allow for excellent performance and easy failover.
>
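For reference, a minimal sketch of the DRBD side of such a mirrored pair
(DRBD 8.x syntax; the host names san1/san2, backing disk, and addresses
are made-up placeholders, not anything from this thread):

  # /etc/drbd.conf -- one resource mirrored between the two storage boxes
  resource r0 {
    protocol C;                # synchronous replication, safest for failover
    on san1 {
      device    /dev/drbd0;
      disk      /dev/sdb1;     # backing device on this box
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on san2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }

The resulting /dev/drbd0 would then be exported as an iSCSI target (e.g.
with ietd or tgtd) so the Xen hosts see one LUN that survives the loss
of either box.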
Hello,

we have 7 servers with about 30 VMs spread across them.
Would an iSCSI device with a 1Gb/1GB interface be enough to hold up to
100 VMs, or would the I/O fall short? Most VMs serve websites, but some
have heavy MySQL usage.
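Rough numbers for that question (assuming "1Gb" means a single gigabit
Ethernet link; a back-of-envelope sketch, not a sizing guarantee):

  1 Gb/s  ~= 125 MB/s raw  ~= 100-115 MB/s after TCP/iSCSI overhead
  ~110 MB/s / 100 VMs  ~= ~1.1 MB/s per VM if all are busy at once

Web VMs are mostly idle, so the wire itself rarely saturates on
throughput; random IOPS on the target's disks and MySQL's synchronous
writes are the more likely bottleneck than the link.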

thx

>
> Also, I cannot stress enough the importance of a decent RAID card.
> Spread the VMs across multiple RAID 1 arrays; a decent SAS card should
> also let you mix and match SAS and SATA drive arrays, which is often
> convenient.
>
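What that spreading looks like in the domU configs, assuming two md
RAID 1 arrays with LVM on top (volume group and guest names are
hypothetical):

  # /etc/xen/web1.cfg -- web guest on the first RAID 1 array
  disk = [ 'phy:/dev/vg_md0/web1,xvda,w' ]

  # /etc/xen/db1.cfg -- database guest on the second array
  disk = [ 'phy:/dev/vg_md1/db1,xvda,w' ]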
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Rudi Ahlers
> Sent: 23 January 2009 12:53
> To: Craig Herring
> Cc: xen-users
> Subject: Re: [Xen-users] Optimizing I/O
>
> If possible, add as many disks to the machine as it can take, and
> spread the VMs out across the disks / partitions.
>
> Or, if you can, set up RAID 10 to spread the I/O of different data
> across different disks / controllers. Don't use IDE, and try to get the
> fastest disks your budget allows. SATA II isn't that much more expensive
> than IDE. Or, if you can afford it and the mobo can handle it, get SCSI
> or SAS drives.
>
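If you go the software-RAID route, a four-disk RAID 10 is a single mdadm
command (device names are examples):

  # stripe over two mirrored pairs; the "far" layout often reads faster
  mdadm --create /dev/md0 --level=10 --layout=f2 \
        --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1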
> On Fri, Jan 23, 2009 at 7:03 AM, Craig Herring <craigeherring@xxxxxxxxx>
> wrote:
>> I've found the biggest issue with virtualization is disk I/O. With NIC
>> I/O I have not seen much of an issue, especially if you are using a GbE
>> NIC; if you are having issues with NIC I/O, that would indicate you are
>> possibly approaching 120 MB/sec. Using separate NICs for your different
>> networks, or bonding them with ALB, can help. If you are using NFS or
>> iSCSI storage, use different NICs than your guest networks. A
>> good-quality switch can help as well, though it's sometimes overlooked,
>> and a good HP 1800 series switch isn't expensive at all. I've seen
>> tests suggesting Intel NICs have lower latency, almost half that of
>> most others.
>>
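On an RPM-based dom0 of this era, ALB bonding is a module option plus
ifcfg files, roughly like this (device names and the address are
placeholders):

  # /etc/modprobe.conf
  alias bond0 bonding
  options bond0 mode=balance-alb miimon=100

  # /etc/sysconfig/network-scripts/ifcfg-bond0
  DEVICE=bond0
  BOOTPROTO=none
  ONBOOT=yes
  IPADDR=192.168.0.10
  NETMASK=255.255.255.0

  # /etc/sysconfig/network-scripts/ifcfg-eth0 (likewise for eth1)
  DEVICE=eth0
  MASTER=bond0
  SLAVE=yes
  ONBOOT=yes
  BOOTPROTO=none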
>> In most situations I find that running RAID 1 / RAID 10 and keeping
>> fewer than 5 VMs per partition is a good rule of thumb to stay away
>> from disk contention issues. Using iSCSI and DRBD can also help with
>> speed, since that dedicates a server to handling disk I/O, and those
>> services can use much of that server's RAM as cache. Stay away from the
>> *fake* RAID stuff and the cheap RAID controllers: buy the better
>> later-generation 3ware, LSI, or Areca controllers, or just use software
>> RAID. Also, format the partition as XFS and set the noatime flag.
>> The WD RE3/2/Raptor drives are incredibly fast, especially in a RAID 1.
>>
>> -Craig
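The XFS + noatime part of that advice is just (device and mount point
are examples):

  mkfs.xfs /dev/md0
  mount -o noatime,nodiratime /dev/md0 /var/lib/xen/images

  # or persistently, one line in /etc/fstab:
  /dev/md0  /var/lib/xen/images  xfs  noatime,nodiratime  0 0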
>>
>> lists@xxxxxxxxxxxx wrote:
>>>
>>> My question was really meant to ask about I/O, in as far as file
>>> transfers between the main host and the network, for both host and
>>> guests, but anything is good. I'm just trying to pull all my questions
>>> and notes together so that I can get going on this in a week or two,
>>> and it's good to see folks sharing their ideas, methods, etc.
>>>
>>> So, for example, on a system that's pretty much RPM-based, what tweaks
>>> can someone make to the various configuration files that would greatly
>>> help overall network I/O?
>>>
>>> Mike
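On the RPM side the usual starting point is /etc/sysctl.conf; commonly
suggested (not guaranteed) TCP buffer tweaks for gigabit links look
like:

  # /etc/sysctl.conf -- larger TCP buffers for GbE
  net.core.rmem_max = 16777216
  net.core.wmem_max = 16777216
  net.ipv4.tcp_rmem = 4096 87380 16777216
  net.ipv4.tcp_wmem = 4096 65536 16777216

  # apply without a reboot:
  sysctl -p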
>>>
>>
>
> --
>
> Kind Regards
> Rudi Ahlers
>

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

