[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] Re: [ANNOUNCE] virtbench now has xen support



On Thu, 2007-05-24 at 17:37 +0200, Jan Michael wrote:
> I didn't have anything to do with benchmarking in the past, and
> especially not with virtualization benchmarks, so there are again
> some questions related to the results of the benchmarking test:
> 
>       1. What can I read out of every single value which is listed above?
> Can you please give a short explanation?
>       2. What are the unit(s) of the measured values?

Hi Jan!

Each one is in nanoseconds; shorter is better.  All of them are run on
processes within one randomly-chosen domU of the four (some are
inter-guest tests which run on two domUs).

> > Time for one context switch via pipe: 8734 (8640 - 9575)

Two processes within the domU: one does a read() waiting for the
other to do a write(), then vice versa.

> > Time for one Copy-on-Write fault: 5898 (5814 - 8963)

This measures the time for a page marked readonly to become writable
when the guest writes to it.

> > Time to exec client once: 573046 (565921 - 615390)

This measures the client process execing itself.

> > Time for one fork/exit/wait: 347687 (345750 - 362250)

This measures the client process fork()ing, the child exiting, and the
parent waiting for it.

> > Time to send 4 MB from host: 55785000 (27069625 - 315191500)

This measures network speed: 4MB TCP transfer from the virtbench process
(dom0) to the client (domU).

> > Time for one int-0x80 syscall: 370 (370 - 403)
> > Time for one syscall via libc: 376 (376 - 377)

These measure the time taken to do a getppid() system call, once via a
raw int 0x80 and once via the libc wrapper.

> > Time to walk linear 64 MB: 1790875 (1711750 - 3332875)
> > Time to walk random 64 MB: 2254500 (2246000 - 2266250)

Memory walking: the linear test reads 64 MB sequentially, the random
test jumps around the same buffer (defeating prefetch and the TLB).

> > Time for one outb PIO operation: 721 (717 - 733)

One I/O operation; roughly the time taken for a hypervisor entry & exit.

> > DISABLED pte-update: glibc version is too old

This test measures the time to update two page table entries, but
requires mremap(), which is only in modern glibcs.

> > Time to read from disk (256 kB): 18810406 (14266718 - 24088906)

Read 256k from the block device.

> > Time for one disk read: 56343 (38593 - 201718)

Read a single block from the block device (ie. latency).

> > DISABLED vmcall: not a VT guest
> > DISABLED vmmcall: not an SVM guest

These only apply to fully-virtualized guests.

> > Time to send 4 MB between guests: 94326750 (79872250 - 729306500)

domU <-> domU 4MB TCP write.

> > Time for inter-guest pingpong: 130316 (119722 - 186511)

domU <-> domU TCP latency.

> > Time to sendfile 4 MB between guests: 134768000 (86528000 - 417646000)

domU <-> domU 4MB TCP write using sendfile().

> > Time to receive 1000 1k UDPs between guests: 26010000 (23384000 -  
> > 66784000)

Sending 1000 UDP packets from domU <-> domU.  This benchmark is horribly
unreliable and should probably be removed.

>       3. What is a good value and what is a bad value? What do these
> measures depend on - hardware or software or both?

Both... run "virtbench local" on the same hardware on a normal Linux
kernel to see what native results are.  This is really the target to aim
for.

>       4. If I get a certain value like this one: Time for one context  
> switch via pipe: 8734 (8640 - 9575). What can I do to improve/tune  
> the performance or the values?

That would be Xen-specific, I'm not entirely sure how much that can be
improved.

>       5. I googled through the web to find any results to compare with  
> mine, but I couldn't find anything. Do you have some?

I do not release benchmark numbers myself; they're quite dependent on
particular hardware, and also virtualization technology is moving
rapidly enough to make them quite obsolete.  virtbench is mainly useful
for spotting regressions, measuring code optimizations and explaining
the results of higher-level benchmarks.

>       6. In the README file it is said that virtbench contains "low level"
> benchmarks. What do you consider a "high level" benchmark?

Things like: kernbench, SDET, Spec, etc.

I hope that helps,
Rusty.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
