
Re: [Xen-users] Xen IO performance issues



Hello,

On 2018-09-19 21:43, Hans van Kranenburg wrote:
On 09/19/2018 09:19 PM, marki wrote:
On 2018-09-19 20:35, Sarah Newman wrote:
On 09/14/2018 04:04 AM, marki wrote:

Hi,

We're having trouble with a dd "benchmark". That alone probably
doesn't mean much, since multiple concurrent jobs using a benchmark
like FIO work OK (see the example run further down), but I'd like to
understand where the bottleneck is and why this behaves differently.

In a Xen DomU running kernel 4.4 it looks like the following, and
throughput is low / not what we're used to:

Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
dm-0              0.00     0.00    0.00  100.00     0.00    99.00  2027.52     1.45   14.56    0.00   14.56  10.00 100.00
xvdb              0.00     0.00    0.00 2388.00     0.00    99.44    85.28    11.74    4.92    0.00    4.92   0.42  99.20

# dd if=/dev/zero of=/u01/dd-test-file bs=32k count=250000
1376059392 bytes (1.4 GB, 1.3 GiB) copied, 7.09965 s, 194 MB/s
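
For comparison, the concurrent FIO runs that do perform OK are sequential writers along these lines (file name, size, and queue depth here are from memory, so treat this as an illustration rather than the exact job we ran):

# fio --name=seqwrite --filename=/u01/fio-test-file --rw=write --bs=32k \
      --size=2g --numjobs=4 --iodepth=16 --ioengine=libaio --direct=1

A single writer (--numjobs=1 --iodepth=1) would be the closest analogue to the dd case above.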

Interesting.

* Which Xen version are you using?

That particular version was XenServer 7.1 LTSR (Citrix). We also tried the current release, 7.6; it makes no difference.
Before you start screaming:
XS eval licenses do not include any support, so we can't ask Citrix.
People in the Citrix discussion forums are nice but don't seem to know the details necessary to solve this.

* Which Linux kernel version is being used in the dom0?

In 7.1 it is "4.4.0+2".
In 7.6 that would be "4.4.0+10".

* Is this a PV, HVM or PVH guest?

In any case, blkfront (and thus blkback) was being used. It seems to transfer data through that ring structure I mentioned, which would explain the small request size, albeit not necessarily the low queue depth.
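
For what it's worth, the limits that blkfront ends up with are visible through the generic block-layer attributes in sysfs; assuming the same xvdb device as in the iostat output, I'd look at:

# cat /sys/block/xvdb/queue/max_sectors_kb  # largest request the queue will issue
# cat /sys/block/xvdb/queue/max_segments    # scatter/gather segments per request
# cat /sys/block/xvdb/queue/nr_requests     # queue depth the block layer allows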

* ...more details you can share?

Well, not much more, except that we are talking about SUSE Linux Enterprise Server 12 up to SP3 in the DomU here. We also tried RHEL 7.5 and the result (slow single-threaded writes) was the same. Reads are not blazingly fast either, BTW.


Note the low queue depth on the LVM device and, additionally, the low request size on the virtual disk. (avgrq-sz is in 512-byte sectors, so 85.28 on xvdb is only about 43 KB per request, versus roughly 1 MB on dm-0.)

(As in the ESXi VM, there's an LVM layer inside the DomU, but it doesn't matter whether it's there or not.)
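
The stacking itself is easy to confirm from inside the DomU (same device names as in the iostat output):

# lsblk /dev/xvdb   # shows the LVM volume (dm-0) sitting on top of the virtual disk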


The above applies to HV + HVPVM modes using kernel 4.4 in the DomU.

Do you mean PV and PVHVM, instead?


Oops, yes; in any case blkfront (and thus blkback) was being used.


What happens when you use a recent linux kernel in the guest, like 4.18?

I'd have to get back to you on that. However, as long as blkback stays the same, I'm not sure what would happen. In any case we'd want to stick with the OSes that the XS people support; I'll have to find out whether any of those ship a more recent kernel than SLES or RHEL.


Do things like using blk-mq make a difference here (just guessing around)?

Honestly, I'd have to find out first what that is. I'll check it out and get back to you.
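
From a quick first skim, blk-mq appears to be the kernel's multiqueue block layer, and whether a disk is actually driven by it should show up in sysfs; assuming the same xvdb device, something like:

# ls /sys/block/xvdb/mq/                # this directory only exists when blk-mq is in use
# cat /sys/block/xvdb/queue/scheduler   # blk-mq devices list the multiqueue schedulers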

Best regards,
Marki
