
Re: [Xen-users] File-based domU - Slow storage write since xen 4.8



On Thu, Jul 20, 2017 at 05:12:33PM +0200, Benoit Depail wrote:
> ##### Dom0 with a kernel >= 4.4.2, or a custom 4.4.1 including commit
> d2081cfe624b5decaaf68088ca256ed1b140672c
> 
> On the dom0:
> # dd if=/dev/zero of=/mnt/d-anb-nab2.img bs=4M count=1280
> 1280+0 records in
> 1280+0 records out
> 5368709120 bytes (5.4 GB) copied, 46.3234 s, 116 MB/s
> 
> # dd if=/dev/zero of=/dev/loop1 bs=4M count=1280
> 1280+0 records in
> 1280+0 records out
> 5368709120 bytes (5.4 GB) copied, 44.948 s, 119 MB/s
> 
> On the domU:
> # dd if=/dev/zero of=/dev/xvdb bs=4M count=1280
> 1280+0 records in
> 1280+0 records out
> 5368709120 bytes (5.4 GB) copied, 102.943 s, 52.2 MB/s
> 
> 
> For completeness' sake, I'll put my findings below:
> 
> > I used git bisect on the linux-stable source tree to build (a lot of)
> > test kernels, and was able to find this commit as the first one
> > introducing the regression:
> >
> > d2081cfe624b5decaaf68088ca256ed1b140672c is the first bad commit
> > commit d2081cfe624b5decaaf68088ca256ed1b140672c
> > Author: Keith Busch <keith.busch@xxxxxxxxx>
> > Date:   Tue Jan 12 15:08:39 2016 -0700
> >
> >     block: split bios to max possible length
> >
> > In terms of kernel versions, the first one showing bad performance in my
> > case is 4.4.2 (with, obviously, 4.4.1 working as expected).
> >
> > Interestingly, this commit is an improvement of
> > d3805611130af9b911e908af9f67a3f64f4f0914, which is present in 4.4-rc8
> > but does not show any performance issue in our case.
> >
> > I can also confirm that this issue is still present in the latest
> > unstable kernel (we tested 4.13-rc1).
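
A rough sketch of the bisect run described above, assuming the linux-stable
tree and the v4.4.1/v4.4.2 tags as the good/bad endpoints (the dd write test
is the manual check you would perform at each step):

  git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
  cd linux-stable
  git bisect start
  git bisect bad v4.4.2     # first kernel showing the slow domU writes
  git bisect good v4.4.1    # last known-good kernel
  # at each step: build the candidate kernel, boot the dom0 on it, re-run
  # the dd write test against /dev/xvdb in the domU, then mark the result
  git bisect good           # or: git bisect bad
  # repeat until git reports "<sha1> is the first bad commit"
  git bisect reset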

I admit I don't have a good working knowledge of xen block. I've spent a
few minutes looking over the code, and as best I can tell, this patch will
build requests of up to 88 blocks ((11 segments * 4k) / 512) where fewer
blocks would have been used without it. The resulting bio vectors may also
have offsets that weren't there before.
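
For reference, that 88-block figure is just the arithmetic spelled out:

  # 11 segments per request * 4096 bytes per page, in 512-byte blocks
  echo $(( 11 * 4096 / 512 ))    # prints 88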

I'm not sure what the significance is. Perhaps it's causing the number
of grants to exceed BLKIF_MAX_SEGMENTS_PER_REQUEST, which appears to take
a less optimal path in the code, but I'm not sure why that would only
harm writes.
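
If it helps to check the segment-count theory, the limits the queue is
advertising can be read from sysfs on the domU (standard block-queue
attributes, using xvdb as in the tests above):

  cat /sys/block/xvdb/queue/max_segments       # segments per request
  cat /sys/block/xvdb/queue/max_sectors_kb     # current request size cap
  cat /sys/block/xvdb/queue/max_hw_sectors_kb  # driver-imposed ceiling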

As a test, could you throttle the xvdb queue's max_sectors_kb? If I
followed xen-blkfront correctly, the default should have it set to 44.
Try setting it to 40.

  echo 40 > /sys/block/xvdb/queue/max_sectors_kb
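
For example, noting the current value first so it can be restored afterwards
(44 is the expected default per the reasoning above; the setting does not
survive a reboot):

  cat /sys/block/xvdb/queue/max_sectors_kb        # expected: 44
  echo 40 > /sys/block/xvdb/queue/max_sectors_kb
  dd if=/dev/zero of=/dev/xvdb bs=4M count=1280   # repeat the write test
  echo 44 > /sys/block/xvdb/queue/max_sectors_kb  # restore the old value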


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users

 

