
Re: [Xen-users] iscsi vs nfs for xen VMs



On Sat, Jan 29, 2011 at 08:46:52PM +0200, Pasi Kärkkäinen wrote:
> > 
> > please provide a link of the full hw configuration
> > 
> 
> 1.25 Million IOPS benchmark:
> http://communities.intel.com/community/wired/blog/2010/04/22/1-million-iops-how-about-125-million
> 
> http://blog.fosketts.net/2010/03/19/microsoft-intel-starwind-iscsi/
> 
> 
> > I cannot see anything about what you are saying when looking, for
> > example, at:
> > 
> > http://download.intel.com/support/network/sb/inteliscsiwp.pdf
> > 
> 
> That PDF is just generic marketing stuff.
> 
> The hardware setup is described here:
> http://communities.intel.com/community/wired/blog/2010/04/20/1-million-iop-article-explained
> and: http://gestaltit.com/featured/top/stephen/wirespeed-10-gb-iscsi/
> 
> Somewhere there was also a PDF describing that benchmark setup.
> 

Found it, it's here:
http://dlbmodigital.microsoft.com/ppt/TN-100114-JSchwartz_SMorgan_JPlawner-1032432956-FINAL.pdf


-- Pasi


> Microsoft presentation about the iSCSI optimizations in Windows Server 2008 R2:
> http://download.microsoft.com/download/5/E/6/5E66B27B-988B-4F50-AF3A-C2FF1E62180F/COR-T586_WH08.pptx
> 
> 
> > >> First of all, they aggregated the performance of *10* targets (if
> > >> the math hasn't changed, 1 aggregator + 10 targets == 11), and they
> > >> have not said what kind of hard disks, or how many, they used to reach
> > >> this performance.
> > >>
> > > 
> > > Targets weren't the point of that test.
> > > 
> > > The point was to show that a single host *initiator* (= iSCSI client)
> > > can handle one million IOPS.
> > 
> > that's meaningless in this thread ...we are discussing how to choose
> > the right storage infrastructure for a Xen cluster
> > 
> 
> This discussion started from the iSCSI vs. AoE performance differences,
> so I just wanted to point out that iSCSI performance is definitely OK.
> 
> > when someone releases something real, that everyone can adopt in their
> > own infrastructure, delivering 1M IOPS, I will be delighted to buy it
> > 
> 
> That was very real, and you can buy the equipment and do the
> same benchmark yourself.
> 
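
For anyone who wants to try a smaller-scale version of this themselves, fio
is an easy way to measure small-block random-read IOPS against an iSCSI LUN
once you're logged in to the target. Below is a rough sketch; the device path,
block size, queue depth and job count are my own placeholder assumptions,
not the settings Intel/Microsoft used:

    # Sketch: drive fio from Python to measure random-read IOPS on an
    # iSCSI block device. /dev/sdX and all tuning values are placeholders.
    import subprocess

    DEVICE = "/dev/sdX"   # replace with your iSCSI LUN's block device

    cmd = [
        "fio",
        "--name=iscsi-randread",
        "--filename=%s" % DEVICE,
        "--direct=1",           # bypass the page cache
        "--rw=randread",
        "--bs=512",             # tiny blocks stress IOPS, not bandwidth
        "--ioengine=libaio",
        "--iodepth=64",
        "--numjobs=4",
        "--runtime=60",
        "--time_based",
        "--group_reporting",
    ]
    subprocess.check_call(cmd)  # fio prints the achieved IOPS in its summary

Watch the initiator's CPU usage while it runs if you want to compare against
the numbers further down.
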
> > [cut]
> > > In that test they used 10 targets, i.e. 10 separate servers as targets,
> > > and each had a big RAM disk shared as an iSCSI LUN.
> > 
> > see above ...it's meaningless in this thread
> > 
> 
> Actually it just tells us the StarWind iSCSI target they used is crap,
> since they had to use 10x more targets than initiators to achieve
> the results ;)
> 
> > 
> > >> In real life it is very hard to reach such performance levels, for example:
> > >> - 48x 2.5in 15k disks in RAID 0 give you ~8700 RW IOPS (in RAID 0 the
> > >> read percentage doesn't affect the results)
> > >>
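
As a back-of-the-envelope check of that number (assuming the usual rule of
thumb of roughly 180 random IOPS per 15k RPM spindle, which is my assumption
and not a figure from the benchmark):

    # Spindle math: ~180 random IOPS per 15k RPM disk is a rule of thumb.
    IOPS_PER_15K_DISK = 180
    DISKS = 48

    print(IOPS_PER_15K_DISK * DISKS)      # ~8640, close to the ~8700 above
    print(1250000 // IOPS_PER_15K_DISK)   # ~6944 spindles for 1.25M IOPS

...which is exactly why a 1M+ IOPS demo has to use RAM disks on the target
side instead of spinning disks.
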
> > > 
> > > The point of that test was to show that the iSCSI protocol is NOT the
> > > bottleneck, Ethernet is NOT the bottleneck, and the iSCSI initiator
> > > (client) is NOT the bottleneck.
> > > 
> > > The bottleneck is the storage server. And that's the reason
> > > they used many *RAM disks* as the storage servers.
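
Just to illustrate that idea: the benchmark itself used the StarWind target
on Windows, but a RAM-disk-backed iSCSI LUN can be approximated on Linux with
the brd module plus tgt. The sketch below is only my illustration of the
concept (the target name, sizes and the assumption that tgtd is already
running are mine, not part of their setup):

    # Sketch: export a Linux kernel RAM disk as an iSCSI LUN via tgt.
    # Assumes root privileges and a running tgtd; all values are illustrative.
    import subprocess

    def run(*args):
        print(" ".join(args))
        subprocess.check_call(args)

    # 1. Create one 4 GiB RAM disk (/dev/ram0); rd_size is in KiB.
    run("modprobe", "brd", "rd_nr=1", "rd_size=4194304")

    # 2. Define a target and back LUN 1 with the RAM disk.
    run("tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "target",
        "--tid", "1", "--targetname", "iqn.2011-01.example:ramdisk0")
    run("tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "logicalunit",
        "--tid", "1", "--lun", "1", "--backing-store", "/dev/ram0")

    # 3. Allow any initiator to log in (acceptable for a lab benchmark only).
    run("tgtadm", "--lld", "iscsi", "--op", "bind", "--mode", "target",
        "--tid", "1", "--initiator-address", "ALL")
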
> > 
> > no one said otherwise ...we are discussing how to create the
> > best clustered Xen setup, and in particular we are also evaluating the
> > differences between all the technologies.
> > 
> > Nevertheless, nothing in the test results showed how much CPU etc. was
> > consumed by this approach.
> > 
> 
> In that benchmark 100% of the CPU was used (at the full ~1.25 million IOPS).
> 
> So when you scale IOPS down to common workload numbers you'll notice
> iSCSI doesn't cause much CPU usage.
> 
> Say, 12,500 IOPS would cause about 1% CPU usage, scaling linearly
> from the Intel+Microsoft results.
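
In other words (a trivial linear extrapolation from their published numbers;
real per-IO overhead won't scale perfectly linearly, so treat it as a rough
estimate):

    # Linear extrapolation: ~1.25 million IOPS saturated 100% of the
    # initiator host's CPU in the Intel/Microsoft benchmark.
    BENCH_IOPS = 1250000
    BENCH_CPU_PCT = 100.0

    def cpu_pct_for(iops):
        # Estimated initiator CPU% at a given IOPS rate, assuming linear scaling.
        return BENCH_CPU_PCT * iops / BENCH_IOPS

    print(cpu_pct_for(12500))   # -> 1.0, i.e. ~1% CPU for 12,500 IOPS
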
> 
> -- Pasi
> 
> 

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

