
[Xen-users] follow UP: Xen + IPv6 + Netapp = NFS read problem


  • To: "XenU List" <xen-users@xxxxxxxxxxxxx>
  • From: G.Bakalarski@xxxxxxxxxx
  • Date: Thu, 13 Dec 2012 13:43:55 +0100 (CET)
  • Delivery-date: Thu, 13 Dec 2012 12:46:06 +0000
  • Importance: Normal
  • List-id: Xen user discussion <xen-users.lists.xen.org>

 Dear Xen'ers

Some follow-up on this topic. No success yet :/

It seems to be related to the performance of the Xen virtual network.
As of now, the main cause of the slow transfer is probably
fragmented datagrams at the IPv6 level (not TCP over IPv6, but IPv6 itself!!!).
In our environment the NetApp filer sends many fragmented
IPv6 frames. When such frames arrive at a bare-metal or
Dom0 system, the "physical" machine reassembles them in time
(at least at 1 Gbit/s). When the receiver is a domU with a
virtual (bridged) Xen interface, it is too slow to reassemble the IPv6
fragments in time, so the transfer slows down.
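
If anyone wants to double-check this on their own domU, a quick scapy
sketch along the following lines should show the IPv6 fragment extension
headers arriving while a slow NFS read is running (the interface name
"eth0" and the 30-second window are just placeholders for your setup):

    # minimal sketch, assuming scapy is installed inside the domU
    from scapy.all import sniff, IPv6, IPv6ExtHdrFragment

    counts = {"ipv6": 0, "ipv6_fragments": 0}

    def classify(pkt):
        if IPv6 in pkt:
            counts["ipv6"] += 1
            if IPv6ExtHdrFragment in pkt:
                counts["ipv6_fragments"] += 1

    # capture for 30 seconds while the slow NFS read is in progress
    sniff(iface="eth0", prn=classify, store=False, timeout=30)
    print(counts)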

When the sender is a Linux machine, no IPv6 packets are fragmented ...

IPv4 packets are NOT fragmented!!!

TOE (TCP offload engine) does not change anything ...

We DID set the MTU to 1500 on all network devices (server, NetApp filer, switches) ...
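
To separate the reassembly cost from NFS itself, a plain UDP-over-IPv6
probe can help. The sketch below is only an illustration (port, addresses
and payload sizes are arbitrary): run the receiver on the domU and the
sender on a bare-metal box, then compare a payload larger than the
1500-byte MTU (which forces IPv6 fragmentation) with one that fits in a
single frame:

    # fragprobe.py -- rough sketch, not a tuned benchmark
    import socket, sys, time

    PORT = 9999           # arbitrary test port
    DURATION = 10         # seconds the sender keeps transmitting

    def receiver():
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        s.bind(("::", PORT))
        s.settimeout(5)
        got, start = 0, None
        while True:
            try:
                data = s.recv(65535)
            except socket.timeout:
                break                      # sender finished
            if start is None:
                start = time.time()
            got += len(data)
        if start:
            secs = time.time() - start
            print("received %.1f MB in %.1f s (%.1f MB/s)"
                  % (got / 1e6, secs, got / 1e6 / secs))

    def sender(host, size):
        s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
        payload = b"x" * size              # > 1500 bytes => IPv6 fragments
        deadline = time.time() + DURATION
        sent = 0
        while time.time() < deadline:
            s.sendto(payload, (host, PORT))
            sent += size
        print("sent %.1f MB" % (sent / 1e6))

    if __name__ == "__main__":
        if sys.argv[1] == "recv":   # on the domU:    python fragprobe.py recv
            receiver()
        else:                       # on bare metal:  python fragprobe.py send <domU-ipv6-addr> 8192
            sender(sys.argv[2], int(sys.argv[3]))

If the large-payload run collapses to a few MB/s on the domU while the
small-payload run stays close to wire speed, that points at fragment
reassembly behind the virtual interface rather than at NFS or the filer.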

So, does anyone know how to force the NetApp filer not to fragment IPv6
packets? Or how to improve Xen network performance (though the first option
would be most welcome)???

Best regards,


Grzegorz

>
> Still no improvements with this issue.
> Thanks to all who tried to help, and to all who sent trash
> rebukes.
>
> After some testing, the problem has been redefined a little.
>
> Currently it is not an NFS issue but a network issue (TCP/UDP).
>
> So the status is the following:
>
> when we have *all* three in action, i.e.:
>
> 1) Xen domU
> 2) IPv6 protocol
> 3) Netapp file server
>
> then we have very poor transfer rates.
>
> E.g.
> NFS - 5-8 MBytes/s
> FTP - 11 MBytes/s
> HTTP - 3-4 MBytes/s
>
> (looks like 10MBit speed :-(  )
>
> If any one of the three elements is missing, we get the full 1000 Mbit/s speed.
> I talked to NetApp support and they suggested playing with the following
> TCP options:
> options ip.tcp.newreno.enable
> options ip.tcp.rfc3390.enable
> options ip.tcp.sack.enable
>
> But setting them on/off did not help much (and sometimes even worsened performance).
>
> My question is whether anyone knows about network issues between a Xen domU
> and FreeBSD machines (the NetApp file server is FreeBSD based).
> Or what should I look for (options/settings) in Xen, the Xen network interfaces,
> or the TCP stack to see what's going on ... ?
>
> Kind regards,
>
> GB
>
>


