[Xen-devel] Odd blkdev throughput results
Hi.

I've been running a couple of benchmarks on a Xen 3.0 installation lately. Part of those compared SMP and CMP configurations on a 2x2 Intel Woodcrest (i.e. two sockets, two cores each). Tests were all performed between a UP dom0 (pinned to core 0) and a UP domU, with the domU VCPU pinned to core 1 (processor 0) or core 3 (processor 1).

Switching from SMP to CMP, netperf -t TCP_STREAM gets me 1686.64 vs. 2673.20 Mbit/s. Lower IPI latency, shared caches: all as one should expect, I believe.

Now, trying the same for block I/O may sound strange, but it can be done: I created a 3 GB ramdisk in dom0 and fed that to the domU. The peak with 'hdparm -t' is 759.37 MB/s on SMP.

The fun part (for me; fun is probably a personal thing) is that throughput is higher than with TCP. That may be due to the block layer being much thinner than TCP/IP networking, or to the fact that transfers utilize the whole 4 KB page size for sequential reads. Possibly some of both; I didn't try to find out. This is not my question.

What strikes me is that for the blkdev interface, the CMP setup is 13% *slower* than SMP, at 661.99 MB/s.

Now, any ideas? I'm mildly familiar with both netback and blkback, and I'd never have expected something like that. Any hint appreciated.

Thanks,
Daniel

--
Daniel Stodden
LRR - Lehrstuhl für Rechnertechnik und Rechnerorganisation
Institut für Informatik der TU München
D-85748 Garching          http://www.lrr.in.tum.de/~stodden
mailto:stodden@xxxxxxxxxx
PGP Fingerprint: F5A4 1575 4C56 E26A 0B33 3D80 457E 82AE B0D8 735B
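[Editor's note: for readers wanting to reproduce a setup along these lines, below is a rough sketch using the Xen 3.0 xm toolstack. The domain name, domU IP address, guest device node, and ramdisk sizing are placeholders, not details from the original post; dom0 is assumed to have been booted with a ramdisk_size= kernel parameter large enough for 3 GB.]

    # Pin dom0's VCPU to core 0, and the domU VCPU to core 1
    # (CMP: same package as dom0) or core 3 (SMP: the other package).
    xm vcpu-pin Domain-0 0 0
    xm vcpu-pin domU 0 1        # use "3" instead of "1" for the SMP case

    # Network path: TCP_STREAM against a netserver running at the other end
    # (domU IP is a placeholder).
    netperf -H 192.168.1.2 -t TCP_STREAM

    # Block path: export a dom0 ramdisk to the domU as a writable phy device,
    # then time sequential reads from inside the domU.
    xm block-attach domU phy:/dev/ram0 xvdb w
    hdparm -t /dev/xvdb         # run inside the domU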