
Re: [Xen-API] NFS Storage Performance on XCP


  • To: xen-api@xxxxxxxxxxxxx
  • From: George Shuklin <george.shuklin@xxxxxxxxx>
  • Date: Mon, 03 Sep 2012 12:35:22 +0400
  • Delivery-date: Mon, 03 Sep 2012 08:35:36 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

This is exactly the problem I have been talking about for two years.

The problem splits into a few issues:

1) First write after a snapshot, or to a new disk.

Because the COW table uses a huge block size (2 MB), the first write to each block forces a large amount of data to be rewritten (and, for a snapshot/cloned disk, the old data to be read back first).
This is barely acceptable; subsequent writes are faster.
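For example (a sketch; /dev/xvdb and the sizes are just placeholders for a freshly attached VHD-backed disk inside the guest):

  # first pass: every 2 MB VHD block still has to be allocated
  dd if=/dev/zero of=/dev/xvdb bs=4M count=256 oflag=direct
  # second pass over the same range: blocks are already allocated, noticeably faster
  dd if=/dev/zero of=/dev/xvdb bs=4M count=256 oflag=direct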

2) Random huge read operations during the second and subsequent writes. I don't know the reason, but it happens: IO monitoring on the storage shows heavy read IO (about 70% of the write operations) during pure writes to a VM disk (no filesystem, just random writes to the block device). It only happens sometimes and I can't pinpoint the exact conditions. I have no explanation for this.
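One way to observe it (a sketch; the device names and the use of fio are illustrative, not the exact tools from my monitoring):

  # dom0 or the storage server: watch the read columns on the device backing the SR
  iostat -x 1

  # domU: pure random writes to the virtual block device, no filesystem on it
  fio --name=randwrite --filename=/dev/xvdb --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --size=1G

If the backend keeps reading while the guest only writes, that is the behaviour I mean.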

3) Relatively slow write operations under a normal workload. My tests with a ramdisk exported via iSCSI show funny numbers: write performance from the VM to a RAW disk (created in a rather ugly way, but it is still possible to create raw images on LVM) differs by about 70%; that is, IO on the raw disk is about 70% slower than IO on the VHD after write initialization.
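The setup was roughly the following (a sketch from memory; UUIDs, sizes and device names are placeholders). A raw VDI can be forced on an LVM-based SR with sm-config:type=raw, and then the same dd is run against it and against a normal VHD VDI from inside the guest:

  xe vdi-create sr-uuid=<lvm-sr-uuid> name-label=raw-test virtual-size=10GiB type=user sm-config:type=raw

  # inside the VM, identical test against both attached disks
  dd if=/dev/zero of=/dev/xvdb bs=4M count=1024 oflag=direct    # raw VDI
  dd if=/dev/zero of=/dev/xvdc bs=4M count=1024 oflag=direct    # VHD VDI, second pass (after write initialization)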

I still think VHD is a very bad, Microsoft-influenced technology... A 2 MB COW block size is horrible. The 2 TB disk limit is bad. The additional read IO for the bitmaps is not good. Snapshots may be a nice feature, but the pesky 'base copy' images break the whole model of VDI relationships.
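To put a number on the 2 MB problem (back-of-the-envelope, taking the full-block copy behaviour from issue 1 at face value): a single 4 KiB write into a block not yet allocated in the child of a snapshot costs roughly a 2 MiB read from the parent plus a 2 MiB write to the child:

  echo $(( 2 * 1024 * 1024 / (4 * 1024) ))    # -> 512, i.e. ~512x amplification on the first touch of each block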

My opinion - VHD is one of the problematic points for XCP.

On 03.09.2012 12:09, ND KK wrote:
Hi All,
I have a problem with NFS storage performance on XCP 1.1.
I have run several tests with commands like the ones below; each result is the average of three runs of the same command:
Write: dd if=/dev/zero of=./test1 bs=4M count=1024 oflag=direct
Read: dd if=./test1 of=/dev/null bs=4M iflag=direct
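(The averaging was a simple loop along these lines; ./test1 sits on the storage under test and only dd's throughput line is kept:)

  for i in 1 2 3; do
      dd if=/dev/zero of=./test1 bs=4M count=1024 oflag=direct 2>&1 | tail -1
  done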

Case 1:
Storage Server - CentOS 5.6:
2 * 2TB SATAII  7.2Krpm HDD in RAID1 for System
12 * 2TB SATAII  7.2Krpm HDD in RAID10 for NFS
10 Gigabit Ethernet (I have tested it with iperf, throughput = 971 MB/s)
Jumbo frames at 9000 bytes are enabled (I have tested that this works; see the commands below)
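(Verification was roughly as follows; the IP is a placeholder:)

  iperf -s                        # on the storage server
  iperf -c <storage-ip> -t 30     # on the XCP host, reports the throughput

  # check that 9000-byte frames really pass end to end (8972 = 9000 minus 28 bytes of IP/ICMP headers)
  ping -M do -s 8972 <storage-ip>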

The performance on storage (local)  R:650(MB/s)  W:478(MB/s)
The performance on XCP Host (dom0)  R:382(MB/s)  W:466(MB/s)
The performance on VM - CentOS (domU)  R:261(MB/s)  W:364(MB/s)

Case 2:
Storage Server - Nexenta:
2 * 2TB SATAII  7.2Krpm HDD in RAID1 for System
5 * 2TB SATAII  7.2Krpm HDD in RAID5 for NFS
10 Gigabit Ethernet
Jumbo frames at 9000 bytes are enabled 

The performance on storage (local)  R:3.2(GB/s)  W:2.6(GB/s)  <- cached; dd iflag/oflag=direct can't be used on Nexenta
The performance on XCP Host (dom0)  R:753(MB/s)  W:824(MB/s)
The performance on  VM - CentOS  (domU)  R:406(MB/s)  W:442(MB/s)

Why is the performance in the VM so much slower than on the host (dom0)?
Is it a blktap2 problem, or the NFS mount options? (If it's the NFS mount options, why is domU so much slower than dom0?)
Thanks!
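(For reference, both can be checked from dom0; a sketch, assuming the standard XCP sr-mount location:)

  # NFS mount options the SR is actually using
  grep sr-mount /proc/mounts

  # blktap/tapdisk processes serving the attached VDIs
  ps ax | grep -i tapdisk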


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api

 

