
Re: [Xen-users] iscsi vs nfs for xen VMs



On Sat, Jan 29, 2011 at 07:26:59PM +0100, Christian Zoffoli wrote:
> On 29/01/2011 17:37, Pasi Kärkkäinen wrote:
> [cut]
> > No, it's not just smoke in the eyes.
> > It clearly shows ethernet and iSCSI can match and beat legacy FC.
> 
> SAS storage and also InfiniBand storage can beat legacy FC, and they cost
> less than a full 10G infrastructure
> 

Yep. 

> >> No one published the hardware list they used to reach such
> >> performance.
> >>
> > 
> > Hardware configuration was published.
> 
> please provide a link to the full hw configuration
> 

1.25 Million IOPS benchmark:
http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million

http://blog.fosketts.net/2010/03/19/microsoft-intel-starwind-iscsi/


> I cannot see anything about what you are saying when looking, for
> example, at:
> 
> http://download.intel.com/support/network/sb/inteliscsiwp.pdf
> 

That pdf is just generic marketing stuff.

The hardware setup is described here:
http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
and: http://gestaltit.com/featured/top/stephen/wirespeed-10-gb-iscsi/

Somewhere there was also a PDF about that benchmark setup.

Microsoft presentation about the iSCSI optimizations in Windows Server 2008 R2:
http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx


> >> First of all they have aggregated the performance of *10* targets (if
> >> the math hasn't changed, 1 aggregator + 10 targets == 11) and they have
> >> not said what kind of hard disks, and how many, they used to reach
> >> this performance.
> >>
> > 
> > Targets weren't the point of that test.
> > 
> > The point was to show that a single host *initiator* (= iSCSI client)
> > can handle one million IOPS.
> 
> that's meaningless in this thread ...we are discussing how to choose
> the right storage infrastructure for a Xen cluster
> 

This discussion started from the iSCSI vs. AoE performance differences..
So I just wanted to point out that iSCSI performance is definitely OK.

> when someone releases something real with 1M IOPS that everyone can adopt
> in their infrastructure, I will be delighted to buy it
> 

That was very real, and you can buy the equipment and do the
same benchmark yourself.
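
(The original numbers were measured on the Windows side, IIRC with Iometer
and the Microsoft iSCSI initiator. If you want a rough Linux-side sanity
check of what your own initiator can push, fio does the job; a hypothetical
example against an already-logged-in iSCSI LUN, where /dev/sdX is a
placeholder for your device and the parameters are just a starting point,
not the ones Intel used:

    fio --name=iscsi-randread --filename=/dev/sdX --direct=1 \
        --ioengine=libaio --rw=randread --bs=512 --iodepth=32 \
        --numjobs=8 --runtime=60 --time_based --group_reporting

Watch the reported IOPS together with the initiator's CPU usage while it
runs.)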

> [cut]
> > In that test they used 10 targets, i.e. 10 separate servers as targets,
> > and each had a big RAM disk shared as an iSCSI LUN.
> 
> see above ...it's meaningless in this thread
> 

Actually it just shows that the StarWind iSCSI target they used is crap,
since they had to use 10x more targets than initiators to achieve
the results ;)

> 
> >> In real life it is very hard to reach high performance levels, for example:
> >> - 48x 2.5in 15k disks in RAID 0 gives you ~8700 RW IOPS (in RAID 0 the %
> >> of reads doesn't impact the results)
> >>
> > 
> > The point of that test was to show that the iSCSI protocol is NOT the
> > bottleneck, Ethernet is NOT the bottleneck, and the iSCSI initiator
> > (client) is NOT the bottleneck.
> > 
> > The bottleneck is the storage server. And that's the reason
> > they used many *RAM disks* as the storage servers.
> 
> no one said anything different ..we are discussing how to create the
> best clustered Xen setup and in particular we are also evaluating the
> differences between all the technologies.
> 
> Nevertheless no one in the test results pointed out how much CPU etc. was
> wasted using this approach.
> 

In that benchmark 100% of the CPU was used (when at 1.3 million IOPS).

So when you scale IOPS to common workload numbers you'll notice
iSCSI doesn't cause much CPU usage..

Say, 12500 IOPS will cause about 1% CPU usage, when scaling linearly
from the Intel+Microsoft results.
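
Back-of-the-envelope, assuming that ~1.25 million IOPS at ~100% CPU data
point and assuming linear scaling (which is an assumption, not something
the benchmark proves), a little Python sketch gives the numbers:

    #!/usr/bin/env python
    # Rough estimate of initiator-side CPU cost at lower IOPS rates,
    # scaled linearly from the Intel/Microsoft data point
    # (assumption: ~1,250,000 IOPS at ~100% CPU on that test host).
    BENCH_IOPS = 1250000
    BENCH_CPU_PCT = 100.0

    def cpu_pct(iops):
        """Estimated initiator CPU% for a given IOPS load (linear scaling)."""
        return iops * BENCH_CPU_PCT / BENCH_IOPS

    for iops in (5000, 12500, 50000, 100000):
        print("%7d IOPS -> ~%.1f%% CPU" % (iops, cpu_pct(iops)))

i.e. roughly 1% CPU at 12500 IOPS, on comparable hardware and only as a
ballpark figure.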

-- Pasi


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

