
Re: [Xen-devel] Xen 4.2.2 / KVM / VirtualBox benchmark on Haswell



On Tue, 9 Jul 2013 15:56:51 +0000, Thanos Makatos <thanos.makatos@xxxxxxxxxx> wrote:
>> Not sure whether anyone has seen this:
>>
>> http://www.phoronix.com/scan.php?page=article&item=intel_haswell_virtualization
>>
>> Some of the comments are interesting, but not really as negative as
>> they used to be. In any case, it may make sense to have a quick look.
>>
>> Lars
>>
> They use PostMark for their disk I/O tests, which is an ancient
> benchmark.

>> is that a good or a bad thing? If so, why?
>
> IMO it's a bad thing because it's far from a representative
> benchmark, which can lead to wrong conclusions when evaluating I/O
> performance.

Ancient doesn't mean non-representative. A good file-system benchmark
is a tricky one to come up with because most FS-es are good at some
things and bad at others. If you really want to test the virtualization
overhead on FS I/O, the only sane way to test it is by putting the
FS on the host's RAM disk and testing from there. That should
expose the full extent of the overhead, subject to the same
caveat about different FS-es being better at different load types.
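
To make that concrete, below is a minimal sketch of such a RAM-disk
test (Python; the tmpfs mount point is a placeholder assumption). Run
the same script on bare metal and inside the guest and compare the
throughput figures; the gap is the virtualization overhead, subject
to the FS caveats above.

#!/usr/bin/env python3
# Minimal sketch: write/read throughput on a RAM-backed filesystem
# (e.g. a tmpfs mounted at /mnt/ramdisk -- the path is a placeholder).
# Run identically on the host and in the guest and compare the output.
import os
import time

MOUNT = "/mnt/ramdisk"         # assumed tmpfs mount point; adjust to taste
FILES = 64                     # number of files to write and read back
SIZE = 16 * 1024 * 1024        # 16 MiB per file
BUF = os.urandom(1024 * 1024)  # 1 MiB of incompressible data

def write_phase():
    start = time.monotonic()
    for i in range(FILES):
        with open(os.path.join(MOUNT, "bench%d" % i), "wb") as f:
            for _ in range(SIZE // len(BUF)):
                f.write(BUF)
            f.flush()
            os.fsync(f.fileno())
    return FILES * SIZE / (time.monotonic() - start)

def read_phase():
    start = time.monotonic()
    for i in range(FILES):
        with open(os.path.join(MOUNT, "bench%d" % i), "rb") as f:
            while f.read(len(BUF)):
                pass
    return FILES * SIZE / (time.monotonic() - start)

if __name__ == "__main__":
    w = write_phase()
    r = read_phase()
    print("write: %.1f MiB/s  read: %.1f MiB/s" % (w / 2**20, r / 2**20))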

Personally I'm in favour of redneck-benchmarks that easily push
the whole stack to saturation point (e.g. highly parallel kernel
compile) since those cannot be cheated. But generally speaking,
the only way to get a worthwhile measure is to create a custom
benchmark that tests your specific application to saturation
point. Any generic/synthetic benchmark will provide results
that are almost certainly going to be misleading for any
specific real-world load you are planning to run on your
system.
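
As an illustration of the saturation approach, a highly parallel
kernel compile can be driven with something as small as the sketch
below (Python; the source tree path is a placeholder and a configured
build toolchain is assumed). Run it on bare metal and in the guest
and compare wall-clock times.

#!/usr/bin/env python3
# Sketch: time a highly parallel kernel build as a whole-stack
# saturation benchmark. The tree location is a placeholder; mrproper,
# defconfig and the parallel build are standard kernel make targets.
import multiprocessing
import subprocess
import time

SRC = "/usr/src/linux"                  # placeholder kernel source tree
JOBS = 2 * multiprocessing.cpu_count()  # oversubscribe to keep every core busy

subprocess.run(["make", "-C", SRC, "mrproper"], check=True)
subprocess.run(["make", "-C", SRC, "defconfig"], check=True)

start = time.monotonic()
subprocess.run(["make", "-C", SRC, "-j%d" % JOBS], check=True)
print("build took %.0fs with -j%d" % (time.monotonic() - start, JOBS))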

For example, on a read-only MySQL load (read-only
because it simplified testing: no need to rebuild huge data
sets between runs, just drop all the caches), in a custom application
performance test that I carried out for a client, ESX showed
a ~40% throughput degradation over bare metal (8 cores/server, 16
SQL threads cat-ing select-filtered general-log extracts, load
generator running in the same VM). And the test machines (both
physical and virtual) had enough RAM in them that they were
only disk I/O bound for the first 2-3 minutes of the test (which
took the best part of an hour to complete), which goes to show
that disk I/O bottlenecks are good at covering up overheads
elsewhere.
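
A very rough sketch of that kind of query-replay test is below -- an
illustration only, not the actual harness behind the numbers above.
It assumes the PyMySQL client library; the connection details and the
file of SELECT statements extracted from the general log are
placeholders.

#!/usr/bin/env python3
# Sketch: N threads replaying SELECT statements against a read-only
# database, reporting aggregate queries/second. Connection details and
# the queries file are placeholders; requires PyMySQL (pip install pymysql).
import threading
import time

import pymysql

THREADS = 16
RUNTIME = 300                             # seconds to run the load
QUERIES_FILE = "general-log-selects.sql"  # one SELECT per line (placeholder)
DB = dict(host="127.0.0.1", user="bench", password="bench", database="test")

done = threading.Event()
counts = [0] * THREADS

def worker(idx, queries):
    conn = pymysql.connect(**DB)
    cur = conn.cursor()
    while not done.is_set():
        for q in queries:
            if done.is_set():
                break
            cur.execute(q)
            cur.fetchall()
            counts[idx] += 1
    conn.close()

if __name__ == "__main__":
    with open(QUERIES_FILE) as f:
        queries = [line.strip() for line in f if line.strip()]
    threads = [threading.Thread(target=worker, args=(i, queries))
               for i in range(THREADS)]
    start = time.monotonic()
    for t in threads:
        t.start()
    time.sleep(RUNTIME)
    done.set()
    for t in threads:
        t.join()
    elapsed = time.monotonic() - start
    print("%.0f queries/s across %d threads" % (sum(counts) / elapsed, THREADS))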

Gordan
