[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

[Xen-users] Xen domU physical partition disk I/O write throughput 50% slower

  • To: <xen-users@xxxxxxxxxxxxxxxxxxx>
  • From: "Hills, Steve" <Steve.Hills@xxxxxxxxxxxx>
  • Date: Tue, 28 Sep 2010 17:53:15 -0400
  • Delivery-date: Tue, 28 Sep 2010 14:54:39 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: ActfV42gwWk7v269SVi//JnkTIdDJg==
  • Thread-topic: Xen domU physical partition disk I/O write throughput 50% slower

dom0 is SLES 11 SP1; the domUs are paravirtualized SLES 10/11. A local physical disk partition (3 SAS disks in RAID1+) is attached to the domU via "phy:".
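
For reference, the "phy:" attachment is the disk line in the domU configuration file; a minimal sketch (the device names here are illustrative, not taken from the actual setup):

```
# Export a raw dom0 partition to the domU as a block device via blkback.
# /dev/sda5 and xvda1 are placeholders for the real partition and guest name.
disk = [ 'phy:/dev/sda5,xvda1,w' ]
```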

Write throughput from the domU is 50% lower than write throughput to the same partition from dom0. Read throughput is roughly equivalent. Tested with bonnie (data size set to twice physical memory) and with "dd conv=fdatasync" where the partition is empty (no files). domU CPU usage is low during the tests.
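
For anyone wanting to reproduce the comparison, a minimal write test of the kind described, run identically in dom0 and in the domU (the path and sizes are illustrative; point it at the partition under test):

```shell
# Write 64 MiB of zeros, with an fdatasync before dd exits, so the
# reported rate includes flushing data to disk rather than just
# filling the page cache. /tmp/ddtest is a placeholder target.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
```

Running the same command from dom0 and from the domU against the same backing storage is what exposes the 50% gap described above.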

I've worked through the "Xen best practices", tried various memory/CPU sizes for dom0 and domU, and tried elevator=noop, but there is always a 50% difference.
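
As a sketch of the scheduler check (elevator=noop on the kernel command line sets the boot-time default; the sysfs file below shows and changes it per device at runtime):

```shell
# List the active I/O scheduler for every block device the kernel sees.
# The scheduler shown in [brackets] is the one currently in use; it can
# be changed at runtime, e.g.: echo noop > /sys/block/sdX/queue/scheduler
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue   # skip if no device exposes a scheduler file
    echo "$f: $(cat "$f")"
done
```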

The question is: how do I find and eliminate the bottleneck? I don't see any way to tune the Xen split drivers or the hypervisor with regard to block I/O.

Steve Hills
Teradata Corporation

Xen-users mailing list


