
Re: [Xen-devel] IO speed limited by size of IO request (for RBD driver)



On 30/04/2013 8:07 PM, Felipe Franciosi wrote:
I noticed you copied your results from "dd", but I didn't see any conclusions
drawn from the experiment.

Did I misunderstand, or do you now have comparable performance on dom0 and
domU when using DIRECT?

domU:
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s

dom0:
# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s


I think that if the performance differs when NOT using DIRECT, the issue must 
be related to the way your guest is flushing the cache. This must be generating 
a workload that doesn't perform well on Xen's PV protocol.

Exactly right. The direct write speeds are close enough to native to be a non-issue. The problem is when *not* using direct mode. You can see from the results that when I don't pass oflag=direct to dd, the write speed is 112 MB/sec from the Dom0 but only 65 MB/sec from the DomU.

As just about every other way of writing from the DomU doesn't use direct I/O, this becomes the normal write speed.
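As an aside, a fairer benchmark of the cached path might be dd with conv=fdatasync, which still goes through the page cache but forces the data to disk before dd reports, so the MB/s figure includes the flush. A sketch, not something I ran for the numbers above:

# dd if=/dev/zero of=output.zero bs=1M count=2048 conv=fdatasync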


Cheers,
Felipe

-----Original Message-----
From: Steven Haigh [mailto:netwiz@xxxxxxxxx]
Sent: 29 April 2013 20:48
To: Roger Pau Monne
Cc: Felipe Franciosi; xen-devel@xxxxxxxxxxxxx
Subject: Re: IO speed limited by size of IO request (for RBD driver)

On 30/04/2013 5:26 AM, Steven Haigh wrote:
On 29/04/2013 6:38 PM, Roger Pau Monné wrote:
Did you also copy xen-blkfront?

Dammit! No, no I didn't. I tried to just copy this back over to the
3.8.8 and 3.8.10 kernel versions, but it came up with too many errors
- so I just rebuilt/packaged the checkout of your git based on 3.8.0-rc7.

It seems you are missing some pieces; you should see something like:

blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;

Now I'm running 3.8.0-rc7 from your git on both DomU and Dom0. In the
DomU, I now see:

blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;

From what you say, this should be what I'd expect.
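For anyone reproducing this: that line comes from the guest kernel ring buffer, so something along these lines should confirm it from inside the DomU:

# dmesg | grep -i blkfront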

From the DomU:
# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 32.9252 s, 65.2 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.23    0.00    9.61    0.00    0.46   89.70

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdd            1071.40  7914.87   67.28  479.18     4.40    32.64   138.82    17.45   31.65   2.00 109.36
sde            1151.72  7943.71   68.65  486.73     4.79    33.20   140.10    13.18   23.87   1.93 107.14
sdc            1123.34  7921.05   66.36  482.84     4.66    32.86   139.89     8.80   15.96   1.86 102.31
sdf            1091.53  7937.30   70.02  483.30     4.54    32.97   138.84    18.98   34.31   1.98 109.45
md2               0.00     0.00    0.00 1003.66     0.00    65.31   133.27     0.00    0.00   0.00   0.00
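All the iostat output here and below is the extended per-device view in megabytes, captured on the Dom0 while dd ran - hence the sd*/md2 device names even for the DomU tests. An invocation along these lines produces that layout (the interval is just an example):

# iostat -xm 5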

# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 25.4705 s, 84.3 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.22    0.00   10.94    0.00    0.22   88.62

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdd              20.35 13703.72    1.75  258.64     0.10    54.54   429.75     0.47    1.81   1.13  29.34
sde            1858.64 11655.36   61.05  199.56     7.51    46.36   423.36     1.54    5.89   3.27  85.27
sdc             142.45 11824.07    5.47  254.70     0.59    47.18   376.03     0.42    1.61   1.02  26.59
sdf             332.39 13489.72   11.38  248.80     1.35    53.72   433.47     1.06    4.10   2.50  65.16
md2               0.00     0.00    3.72  733.48     0.06    91.68   254.86     0.00    0.00   0.00   0.00

I just thought - I should probably include a baseline by mounting the same LV 
in the Dom0 and doing the exact same tests.

# dd if=/dev/zero of=output.zero bs=1M count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 19.1554 s, 112 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   23.18   76.60    0.22    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdd             139.07 14785.43   11.92  286.98     0.59    58.88   407.50     2.60    8.71   1.84  54.92
sde              83.44 14846.58    8.39  292.05     0.36    59.09   405.23     4.12   13.69   2.56  76.84
sdc              98.23 14828.04    9.93  289.18     0.42    58.84   405.73     2.55    8.45   1.75  52.43
sdf              77.04 14816.78    8.61  289.40     0.33    58.96   407.51     3.89   13.05   2.52  75.14
md2               0.00     0.00    0.00  973.51     0.00   116.72   245.55     0.00    0.00   0.00   0.00

# dd if=/dev/zero of=output.zero bs=1M count=2048 oflag=direct
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 24.8914 s, 86.3 MB/s

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.00    0.00   12.22   87.58    0.21    0.00

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util
sdd              32.09 12310.14    1.04  291.10     0.13    49.22   345.99     0.48    1.66   0.91  26.71
sde            1225.88  9931.88   39.54  224.84     4.94    39.70   345.81     1.20    4.53   2.44  64.55
sdc              19.25 11116.15    0.62  266.05     0.08    44.46   342.06     0.41    1.53   0.86  22.94
sdf            1206.63 11122.77   38.92  253.21     4.87    44.51   346.17     1.39    4.78   2.46  71.97
md2               0.00     0.00    0.00  634.37     0.00    79.30   256.00     0.00    0.00   0.00   0.00

This is running the same kernel - 3.8.0-rc7 from your git.
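Given the subject of this thread, the request sizes are worth a look: iostat's avgrq-sz is in 512-byte sectors, so the buffered writes to md2 averaged about 67 KiB per request from the DomU (133.27 sectors) against about 123 KiB from the Dom0 (245.55 sectors). The upper bound each side will issue lives in sysfs; for the devices used here that would be something like:

# cat /sys/block/md2/queue/max_sectors_kb     (on the Dom0: the array backing xvdb)
# cat /sys/block/xvdb/queue/max_sectors_kb    (in the DomU: the PV disk)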

And also for the sake of completeness, the Dom0 grub.conf:
title Scientific Linux (3.8.0-1.el6xen.x86_64)
          root (hd0,0)
          kernel /xen.gz dom0_mem=1024M cpufreq=xen dom0_max_vcpus=1 dom0_vcpus_pin
          module /vmlinuz-3.8.0-1.el6xen.x86_64 ro root=/dev/vg_raid1/xenhost rd_LVM_LV=vg_raid1/xenhost rd_MD_UUID=afb92c19:b9b1e3ae:07af315d:738e38be rd_NO_LUKS rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=auto quiet panic=5 i915.i915_enable_rc6=7 i915.i915_enable_fbc=1 i915.lvds_downclock=1 drm.vblankoffdelay=1
          module /initramfs-3.8.0-1.el6xen.x86_64.img

and the DomU config:
# cat /etc/xen/zeus.vm
name            = "zeus.vm"
memory          = 1024
vcpus           = 2
cpus            = "1-3"
disk            = [ 'phy:/dev/vg_raid1/zeus.vm,xvda,w', 'phy:/dev/md2,xvdb,w' ]
vif             = [ "mac=02:16:36:35:35:09, bridge=br203, vifname=vm.zeus.203", "mac=10:16:36:35:35:09, bridge=br10, vifname=vm.zeus.10" ]
bootloader      = "pygrub"

on_poweroff     = 'destroy'
on_reboot       = 'restart'
on_crash        = 'restart'

All the tests are being done on /dev/md2 (from Dom0) presented as xvdb on the 
DomU.
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 sdd[5] sdc[4] sdf[1] sde[0]
        3906766592 blocks super 1.2 level 6, 128k chunk, algorithm 2 [4/4] [UUUU]
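One more knob that can matter for buffered writes onto RAID6: md's stripe cache, which defaults fairly small. Checking (and, if need be, raising) it on the Dom0 would look like this - the 4096 is an illustrative value, not one from this box:

# cat /sys/block/md2/md/stripe_cache_size
# echo 4096 > /sys/block/md2/md/stripe_cache_size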

--
Steven Haigh

Email: netwiz@xxxxxxxxx
Web: https://www.crc.id.au
Phone: (03) 9001 6090 - 0412 935 897
Fax: (03) 8338 0299

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel