Re: [Xen-devel] [linux-linus test] 25478: regressions - FAIL
On Mon, 2014-03-17 at 12:01 +0000, David Vrabel wrote:
> On 14/03/14 17:07, Ian Campbell wrote:
> > On Fri, 2014-03-14 at 16:42 +0000, xen.org wrote:
> >> flight 25478 linux-linus real [real]
> >> http://www.chiark.greenend.org.uk/~xensrcts/logs/25478/
> >>
> >> Regressions :-(
> >>
> >> Tests which did not succeed and are blocking,
> >> including tests which could not be run:
> >>  test-amd64-i386-pair 17 guest-migrate/src_host/dst_host fail REGR. vs. 12557
> >
> > Is anyone looking at these? Apparently this hasn't passed for 23 months:
> > http://xenbits.xen.org/gitweb/?p=linux-pvops.git;a=shortlog;h=refs/heads/tested/linux-linus
> >
> > Looking through the recent failures, this migration one seems quite
> > common, but there seem to be a few others; search for "[linux-linus
> > test]" in http://lists.xen.org/archives/html/xen-devel/2014-03/ for some
> > examples.
>
> skb compound pages result in too much SWIOTLB usage. In XenServer we
> have the following to disable it.
>
> net/core: Order-3 frag allocator causes SWIOTLB bouncing under Xen

Ah, I remember this issue (but not this symptom). I thought it had been
fixed, but obviously not (my own skanky patch was rejected, but I thought
it had been solved some other way).

The host which is suffering this does have an IOMMU, so Zoltan's third
possibility for a proper fix ought to be plausible, but I suppose dom0
needs to somehow know whether or not to bounce the pages.

> From: Zoltan Kiss <zoltan.kiss@xxxxxxxxxx>
>
> THIS PATCH IS NOT INTENDED TO BE UPSTREAMED, IT HAS ONLY INFORMING PURPOSES!
>
> I've noticed a performance regression with upstream kernels when used as Dom0
> under Xen. The classic kernel can utilize the whole bandwidth of a 10G NIC
> (ca. 9.3 Gbps), but upstream can reach only ca. 7 Gbps. I found that it
> happens because SWIOTLB has to do double buffering. The per-task frag
> allocator introduced in 5640f7 creates 32 KB frags, which are not contiguous
> in mfn space.
> This patch provides a workaround by going back to the old way. The possible
> ideas that came up to solve this properly:
>
> * make sure Dom0 memory is contiguous: it sounds trivial, but it doesn't work
>   with driver domains, and there are lots of situations where this is not
>   possible.
> * use PVH Dom0: so we will have an IOMMU. In the future sometime.
> * use an IOMMU with PV Dom0: this seems likely to happen earlier.
>
> Signed-off-by: Zoltan Kiss <zoltan.kiss@xxxxxxxxxx>
>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index d6d024c..44614a5 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1791,7 +1791,7 @@ struct sk_buff *sock_alloc_send_skb(struct sock *sk, unsigned long size,
>  EXPORT_SYMBOL(sock_alloc_send_skb);
>
>  /* On 32bit arches, an skb frag is limited to 2^15 */
> -#define SKB_FRAG_PAGE_ORDER get_order(32768)
> +#define SKB_FRAG_PAGE_ORDER get_order(4096)
>
>  bool sk_page_frag_refill(struct sock *sk, struct page_frag *pfrag)
>  {
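For context on the bounce decision discussed above: under PV, pages that are
contiguous in pseudo-physical (pfn) space are not necessarily contiguous in
machine (mfn) space, so before mapping a multi-page buffer for DMA the Xen
swiotlb path has to check mfn contiguity and bounce when the check fails. A
minimal sketch of that kind of check, assuming pfn_to_mfn() as the pfn-to-mfn
lookup; the function name is illustrative and this is a paraphrase in the
spirit of range_straddles_page_boundary(), not the exact upstream code:

/* Sketch only: does a physical buffer stay mfn-contiguous?
 * Assumes pfn_to_mfn(), PFN_DOWN(), PAGE_MASK/PAGE_SIZE/PAGE_SHIFT
 * from the usual kernel/Xen headers.
 */
static int buffer_is_machine_contiguous(phys_addr_t paddr, size_t size)
{
	unsigned long pfn = PFN_DOWN(paddr);
	unsigned int offset = paddr & ~PAGE_MASK;
	unsigned long nr_pages = (offset + size + PAGE_SIZE - 1) >> PAGE_SHIFT;
	unsigned long mfn = pfn_to_mfn(pfn);
	unsigned long i;

	/* A buffer that fits in one page cannot straddle an mfn gap. */
	if (offset + size <= PAGE_SIZE)
		return 1;

	/* Each following pfn must map to the next mfn; otherwise the buffer
	 * is discontiguous in machine memory and swiotlb must bounce it. */
	for (i = 1; i < nr_pages; i++) {
		if (pfn_to_mfn(pfn + i) != mfn + i)
			return 0;
	}
	return 1;
}

With the one-line patch above, SKB_FRAG_PAGE_ORDER drops from get_order(32768)
(order 3 with 4 KB pages) to get_order(4096) (order 0), so sk_page_frag_refill()
hands out chunks that never cross a machine page boundary and a check like the
one sketched here cannot fail for them; the price is more page allocations and
less coalescing, which is why the patch is explicitly not meant for upstream.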