
Re: [Xen-users] Xen IO performance issues


  • To: marki <list+xenusers@xxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxxx
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Fri, 28 Sep 2018 10:46:48 +0200
  • Delivery-date: Fri, 28 Sep 2018 08:47:58 +0000
  • List-id: Xen user discussion <xen-users.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 20/09/2018 11:49, marki wrote:
> Hello,
> 
> On 2018-09-19 21:43, Hans van Kranenburg wrote:
>> On 09/19/2018 09:19 PM, marki wrote:
>>> On 2018-09-19 20:35, Sarah Newman wrote:
>>>> On 09/14/2018 04:04 AM, marki wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> We're having trouble with a dd "benchmark". Even though that probably
>>>>> doesn't mean much, since multiple concurrent jobs using a benchmark
>>>>> like FIO, for example, work OK, I'd like to understand where the
>>>>> bottleneck is / why this behaves differently.
>>>>>
>>>>> Now in a Xen DomU running kernel 4.4 it looks like the following and
>>>>> speed is low / not what we're used to:
>>>>>
>>>>> Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
>>>>> dm-0              0.00     0.00    0.00  100.00     0.00    99.00  2027.52     1.45   14.56    0.00   14.56  10.00 100.00
>>>>> xvdb              0.00     0.00    0.00 2388.00     0.00    99.44    85.28    11.74    4.92    0.00    4.92   0.42  99.20
>>>>>
>>>>> # dd if=/dev/zero of=/u01/dd-test-file bs=32k count=250000
>>>>> 1376059392 bytes (1.4 GB, 1.3 GiB) copied, 7.09965 s, 194 MB/s
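A buffered dd like the one above measures page-cache writeback as much as the
block path itself. A rough sketch for separating the two follows; the mktemp
path is only an example stand-in for the filesystem under test, not a path
from this thread:

```shell
# Example only: substitute the filesystem under test for the mktemp path.
TESTFILE=$(mktemp /tmp/dd-test.XXXXXX)

# Buffered write: data lands in the page cache first; conv=fsync makes dd
# flush at the end, so the reported rate includes writeback to the device.
dd if=/dev/zero of="$TESTFILE" bs=32k count=1000 conv=fsync

# Direct I/O bypasses the page cache, so every request crosses the
# blkfront/blkback ring individually; larger blocks usually help here.
# (O_DIRECT is not supported on all filesystems, e.g. tmpfs.)
dd if=/dev/zero of="$TESTFILE" bs=1M count=32 oflag=direct ||
    echo "O_DIRECT not supported on this filesystem"

rm -f "$TESTFILE"
```

Comparing the two rates gives a hint whether the cache or the ring transport
is the limiting factor.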
>>
>> Interesting.
>>
>> * Which Xen version are you using?
> 
> That particular version was XenServer 7.1 LTSR (Citrix). We also tried
> the newer current release 7.6; it makes no difference.
> Before you start screaming:
> XS eval licenses don't include any support, so we can't ask Citrix.
> People in the Citrix discussion forums are nice but don't seem to know
> the details necessary to solve this.
> 
>> * Which Linux kernel version is being used in the dom0?
> 
> In 7.1 it is "4.4.0+2".
> In 7.6 that would be "4.4.0+10".
> 
>> * Is this a PV, HVM or PVH guest?
> 
> In any case blkfront (and thus blkback) were being used. That seems to
> transfer data through the ring structure I mentioned, which explains the
> small block size, albeit not necessarily the low queue depth.
> 
>> * ...more details you can share?
> 
> Well, not much more, except that we are talking about SUSE Linux
> Enterprise 12 up to SP3 in the DomU here. We also tried RHEL 7.5 and the
> result (slow single-threaded writes) was the same. Reads aren't
> blazingly fast either, BTW.
> 
>>
>>>>> Note the low queue depth on the LVM device and additionally the low
>>>>> request size on the virtual disk.
>>>>>
>>>>> (As in the ESXi VM there's an LVM layer inside the DomU but it
>>>>> doesn't matter whether it's there or not.)
>>>>>
>>>>>
>>>>> The above applies to HV + HVPVM modes using kernel 4.4 in the DomU.
>>
>> Do you mean PV and PVHVM, instead?
>>
> 
> Oops, yes. In any case blkfront (and thus blkback) were being used.
> 
>>
>> What happens when you use a recent linux kernel in the guest, like 4.18?
> 
> I'd have to get back to you on that. However, as long as blkback stays
> the same, I'm not sure it would make a difference.
> In any case we'd want to stick with the OSes that the XS people support;
> I'll have to find out whether any of them ship a more recent kernel than
> SLES or RHEL.

I have just done a small test for other purposes that required doing
reads in a domU through blkfront/blkback. The data was cached in dom0,
so the only limiting factors were CPU/memory speed and the block ring
interface of Xen. I was able to transfer 1.8 GB/s on a laptop with a
dual-core i7-4600M CPU @ 2.90GHz.

So I don't think the ring buffer interface is a real issue here.

Kernels (in domU and dom0) are 4.19-rc5, Xen is 4.12-unstable.

Using a standard SLE12-SP2 domU (kernel 4.4.121) with the same dom0
as in the test before returned the same result.
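The ceiling such a cached-read test runs into can be approximated on any
Linux box: write a scratch file once so it sits in the page cache, then time
a second read, which is served from memory. The temp-file path below is just
an example, not taken from the test described above:

```shell
# Populate a scratch file; the kernel keeps the pages in the page cache.
F=$(mktemp /tmp/readtest.XXXXXX)
dd if=/dev/zero of="$F" bs=1M count=256 2>/dev/null

# Re-read it: this rate reflects memory/CPU speed rather than the disk --
# loosely analogous to the dom0-cached read described above.
dd if="$F" of=/dev/null bs=1M

rm -f "$F"
```

If the in-guest number is far below a comparable bare-metal figure, the
overhead is in the virtualization path rather than the storage itself.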


Juergen

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

