
Re: [Xen-users] Aoe or iScsi???



On Thursday 08 July 2010 14:18:00 Adi Kriegisch wrote:
> Hi!
> 
> [SNIP]
> 
> > >  latency.) 2. measure the IOPS you get. I personally prefer using
> > > FIO[3] which is readily available in Debian. FIO is fully configurable;
> > > there are however some reasonable examples which you might use:
> > >    /usr/share/doc/fio/examples/iometer-file-access-server mimics a
> > > typical file server workload with 80% read. The IOPS calculator
> > > above[1] is only
> 
> [SNAP]
> 
> > I have been looking at FIO, but what jobfile do you use that you find
> > optimal to test network storage for Xen?
> 
> Actually, this is a very hard question to answer! ;-)
> Short answer: I use iometer-file-access-server with (1) 80%, (2) 100% and
> (3) 0% read. That gives me a feeling for what I can expect from the
> hardware...
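> To give an idea, a stripped-down jobfile in that spirit might look like
> the sketch below (not the shipped iometer example -- that one also mixes
> block sizes; the directory and sizes here are made up, adjust them):
> 
> [global]
> # async I/O via libaio; direct=1 bypasses the page cache
> ioengine=libaio
> direct=1
> # mixed random reads and writes; rwmixread is what I vary:
> # (1) 80, (2) 100, (3) 0
> rw=randrw
> rwmixread=80
> # fixed 4k blocks for simplicity
> bs=4k
> iodepth=64
> # run time-based against a file (ideally larger than RAM)
> size=4g
> runtime=120
> time_based
> 
> [job1]
> # hypothetical mount point of the storage under test
> directory=/mnt/teststorage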
> 
> Long answer:
> Benchmarking is actually about lying -- depending on who is doing those
> benchmarks. There is an excellent presentation about benchmarking by Greg
> Smith of Postgres fame[1] stating some universal truths about how to
> conduct benchmarks.
> The most important part is knowing what you need: the more details you
> have, the less you're lying -- and the smaller the danger of being lied to.
> In benchmarking terms this means defining an I/O profile. To get that
> right you need to monitor your existing infrastructure (e.g. by collecting
> the output of 'iostat') and to run the very same benchmarks you are
> planning for the new hardware on the old one as well.
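> 
> Collecting that is simple enough; something like the following (sysstat's
> iostat, the interval and log path are just examples) gives you a baseline
> to compare against later:
> 
> # one device report every 60 seconds, appended to a log
> iostat -d 60 >> /var/log/iostat-baseline.log &
> 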
> One of my servers reports the following on iostat for a disk containing
> some mailboxes:
> Device:            tps   Blk_read/s   Blk_wrtn/s   Blk_read   Blk_wrtn
> sda              45.40       356.65       158.90 2005866781  893707736
> 
> This means: since power-on (65 days in this case) the server has done 45
> I/O operations per second on average. Now it is up to me to know whether
> this server does about the same amount of work 24/7 or is only busy during
> the day, in which case you need to calculate two or three times that many
> IOPS for 12 or 8 really busy hours. (This is a file server scenario in an
> average office, i.e. 45*(24/12) or 45*(24/8).)
> The next thing I get is the ratio between reads and writes:
> 1% .............. (2005866781 + 893707736)/100
> read percent .... 2005866781.0/percent = 69.2 %
> write percent ... 893707736.0/percent = 30.8 %
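> 
> (If you don't want to do that by hand, a one-liner along these lines does
> the same, assuming the six-column iostat layout shown above:)
> 
> iostat | awk '/^sda / { t = $5 + $6;
>     printf "read %.1f%%  write %.1f%%\n", 100*$5/t, 100*$6/t }'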
> 
> There is one more very important thing in the output of iostat, the
> average CPU usage (this sample is actually from a different machine than
> the one above):
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>           10.35    2.44    5.69    8.63    0.00   72.89
> 
> This shows that the system spends more than 8% of its time waiting for
> outstanding I/Os to complete (which is a bad thing ;-) ).
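> 
> If you just want to keep an eye on that number, the CPU-only mode of
> iostat is enough (note that the first report it prints is the average
> since boot, the following ones cover the interval):
> 
> # CPU utilization only, one report every 5 seconds
> iostat -c 5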
> 
> With those numbers one is able to get a better understanding of what the
> servers are doing. Using that information, you can create a set of FIO
> jobfiles that roughly describe the expected workload and show whether a
> storage backend (and, in part, the rest of the system) is able to handle
> it.
> When you do benchmarks from time to time you'll get a feeling for what a
> storage backend can handle just by looking at FIO's results. By using your
> own jobfiles with more jobs running in parallel (numjobs=...), using
> different block sizes (either blocksize=Xk where X is 4, 8, ... or mixing
> block sizes as done in iometer-file-access-server) and finding a balance
> between read and write transactions, you will see more clearly whether a
> storage system can handle your specific workload.
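> 
> As an illustration of those knobs, such a jobfile fragment could look
> like this (the split percentages are made up -- derive yours from your
> own I/O profile):
> 
> [parallel-mix]
> rw=randrw
> # ~70/30 read/write ratio, as derived from the iostat numbers above
> rwmixread=70
> # 60% 4k, 30% 8k and 10% 64k requests (made-up split)
> bssplit=4k/60:8k/30:64k/10
> # eight workers in parallel, results summed into one report
> numjobs=8
> group_reporting
> ioengine=libaio
> direct=1
> iodepth=16
> size=1g
> runtime=120
> time_based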
> 
> Do those benchmarks in your own setup. Do not let someone else do them
> for you. In case of network storage, be it iSCSI, AoE, NFS or whatever,
> refrain from running the benchmarks on the storage system itself: the
> results will not reflect the real throughput of the system; the numbers
> will almost always be higher!
> 
> Uh, quite a lot of text again... ;) Hope this helps! Feedback and
> discussion appreciated...
> 
> -- Adi
> 
> [1]
>  http://www.pgcon.org/2009/schedule/attachments/123_pg-benchmarking-2.pdf
> 


Adi,

most useful and elaborate info, thx a mille!!!

B.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
