Re: [Xen-devel] 4.2.1: Poor write performance for DomU.
On 21/08/13 02:48, Konrad Rzeszutek Wilk wrote:

On Mon, Mar 25, 2013 at 01:21:09PM +1100, Steven Haigh wrote:

So, based on my tests yesterday, I decided to break the RAID6 and pull a drive
out of it to test directly on the 2TB drives in question.

The array in question:
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 sdd[4] sdc[0] sde[1] sdf[5]
      3907026688 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]

# mdadm /dev/md2 --fail /dev/sdf
mdadm: set /dev/sdf faulty in /dev/md2
# mdadm /dev/md2 --remove /dev/sdf
mdadm: hot removed /dev/sdf from /dev/md2

So, all tests are to be done on /dev/sdf:
Model Family:     Seagate SV35
Device Model:     ST2000VX000-9YW164
Serial Number:    Z1E17C3X
LU WWN Device Id: 5 000c50 04e1bc6f0
Firmware Version: CV13
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical

From the Dom0:
# dd if=/dev/zero of=/dev/sdf bs=1M count=4096 oflag=direct
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 30.7691 s, 140 MB/s

Create a single partition on the drive, and format it with ext4:
Disk /dev/sdf: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x98d8baaf

   Device Boot      Start         End      Blocks   Id  System
/dev/sdf1            2048  3907029167  1953513560   83  Linux

Command (m for help): w

# mkfs.ext4 -j /dev/sdf1
......
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Mount it on the Dom0:
# mount /dev/sdf1 /mnt/esata/
# cd /mnt/esata/
# bonnie++ -d . -u 0:0
....
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
xenhost.lan.crc.  2G   425  94 133607  24 60544  12   973  95 209114  17 296.4   6
Latency             70971us     190ms     221ms   40369us   17657us     164ms

So from the Dom0: 133MB/sec write, 209MB/sec read.

Now, I'll attach the full disk to a DomU:
# xm block-attach zeus.vm phy:/dev/sdf xvdc w

And we'll test from the DomU:
# dd if=/dev/zero of=/dev/xvdc bs=1M count=4096 oflag=direct
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 32.318 s, 133 MB/s

Partition the same as in the Dom0 and create an ext4 filesystem on it. I notice
something interesting here. In the Dom0, the device is seen as:
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

In the DomU, it is seen as:
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Not sure if this could be related - but continuing testing:
   Device Boot      Start         End      Blocks   Id  System
/dev/xvdc1           2048  3907029167  1953513560   83  Linux

# mkfs.ext4 -j /dev/xvdc1
....
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

# mount /dev/xvdc1 /mnt/esata/
# cd /mnt/esata/
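As a side note on the sector size difference above, the sizes each side reports
can be compared directly before the bonnie++ run - a quick sketch, assuming
blockdev from util-linux and the usual sysfs queue attributes are present in
both kernels:

In the Dom0, against the raw disk:
# blockdev --getss --getpbsz /dev/sdf
# cat /sys/block/sdf/queue/minimum_io_size /sys/block/sdf/queue/optimal_io_size

In the DomU, against the attached vbd:
# blockdev --getss --getpbsz /dev/xvdc
# cat /sys/block/xvdc/queue/minimum_io_size /sys/block/xvdc/queue/optimal_io_size

If the numbers differ, that at least confirms blkfront is not passing through
the physical sector size the backend sees.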
# bonnie++ -d . -u 0:0
....
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
zeus.crc.id.au    2G   396  99 116530  23 50451  15  1035  99 176407  23 313.4   9
Latency             34615us     130ms     128ms   33316us   74401us     130ms

So still... 116MB/sec write, 176MB/sec read to the physical device from the
DomU. More than acceptable. It leaves me to wonder.... Could the Dom0 seeing
the drives as 4096 byte sectors, but the DomU seeing them as 512 byte sectors,
be causing an issue?

There is certain overhead in it.

I still have this in my mailbox, so I am not sure whether this issue was ever
resolved? I know that the indirect patches in Xen blkback and Xen blkfront are
meant to resolve some of these issues - by being able to carry a bigger
payload. Did you ever try a v3.11 kernel in both dom0 and domU?

Thanks.

Hi Konrad,

I don't believe I ever fixed it - however I haven't tried kernel 3.11 in Dom0
OR DomU... I'll keep this in my inbox and try to build a 3.11 kernel for both
in the near future for testing...

--
Steven Haigh

Email: netwiz@xxxxxxxxx
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
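Following up on the indirect descriptor question above: once both kernels are
at 3.11, whether the feature is actually negotiated should be visible from the
Dom0 side of the vbd in xenstore. A rough sketch, assuming the backend
advertises feature-max-indirect-segments and that blkfront exposes its segment
cap as the module parameter "max" (both are assumptions here, not verified on
this setup):

From the Dom0, look for the backend advertisement under the vbd entries:
# xenstore-ls -f /local/domain/0/backend/vbd | grep -i indirect

From the DomU, the cap blkfront will request, if the parameter exists under
that name:
# cat /sys/module/xen_blkfront/parameters/max

If the backend entry is missing, blkback in that dom0 kernel predates the
indirect descriptor support and the larger payloads won't be used.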