
RE: [Xen-users] Another GPLPV pre-release 0.9.11-pre20

"James Harper" <james.harper@xxxxxxxxxxxxxxxx> wrote on 05.11.2008 11:49:18:

> > Here is one example:
> > 10:22:23.172581 IP (tos 0x0, ttl 128, id 18177, offset 0, flags [DF],
> > proto: TCP (6), length: 74) 213.250.XX.XX.ms-wbt-server >
> > 213.250.XX.XX.41635: P, cksum 0x6884 (incorrect (-> 0xc813), 39:61(22)
> > ack 1 win 64620 <nop,nop,timestamp 46147 22223084>
>
> Yes, I would expect to see that. The checksum calculation is deferred as
> late as possible. If the packet goes from DomU to DomU and both support
> rx+tx offloading, then it won't be done at all - the tx side will
> 'offload' the calculation to its virtual card, Dom0 will record the
> fact that the data is correct, and will tell the receiving DomU that the
> checksum has been verified and is good (even though it is incorrect).
> That's one less pass of the data to be done, and if you are transferring
> gigabits/second, that's gigabits of data per second that don't have to be
> added up.

But those packets are going out to the network? In that example it was a packet between a DomU and rdesktop on my Ubuntu workstation.

Previously, with the old GPLPV drivers, if both Windows DomUs ran GPLPV drivers with checksum offloading enabled, then e.g. a remote desktop connection between them did not open at all, and Windows networking did not work correctly between the DomUs either. tcpdump then showed that the DomU with GPLPV drivers had almost all packets marked with an incorrect TCP cksum; with checksum offloading disabled, tcpdump showed all checksums as correct and there were no problems communicating between the DomUs.

The checksums were also invalid on packets going from Dom0 out to the network. For example, when I tcpdumped remote desktop connections between my workstation and a DomU, there were lots of incorrect cksums with checksum offload enabled, but with it disabled all checksums showed the correct value.
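
As a side note on checking this: a tcpdump taken on the sending host will always show "incorrect" checksums while offload is enabled, because the capture happens before the hardware fills the checksum in, so a capture on the receiving machine is the more meaningful test. A quick way to inspect and toggle the offload settings on the Dom0 side is ethtool (a sketch only, assuming the physical NIC is peth0 and the DomU's backend interface is vif1.0; adjust the names for your setup):

# show current offload settings on the physical NIC
ethtool -k peth0
# disable transmit checksum offload for a test
ethtool -K peth0 tx off
# the same works on a DomU backend interface in Dom0
ethtool -k vif1.0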

> Sort of the same with large send offload.
>
> >
> > I am bridging VLAN interfaces to Xen DomUs
> >
> > eg.
> > vconfig add peth0 13       # create VLAN 13 on peth0 (appears as peth0.13)
> > brctl addbr br13           # create a bridge for the VLAN
> > brctl addif br13 peth0.13  # attach the VLAN interface to the bridge
> > ifconfig br13 up           # bring the bridge up
> >
>
> If you can get iperf on your windows machine, Dom0 machine, and Bacula
> machine, it will make things much easier to test. In your domU, run
> 'iperf -s -w1M' (run as server with 1Mbyte window). In Dom0 and Bacula,
> run 'iperf -c name_of_DomU_windows_machine -w1M'. What is performance
> like:
>
> Dom0<->WinDomU
> Bacula<->WinDomU
>
> With and without offload enabled?

Large Send Offload enabled, checksum offload disabled:
C:\TEMP>iperf -s -w1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
This is Dom0<->WinDomU:
[1872] local 213.250.XX.XX port 5001 connected with 213.250.DOM.0 port 36739
[ ID] Interval       Transfer     Bandwidth
[1872]  0.0-10.0 sec  1.85 GBytes  1.59 Gbits/sec
This is Bacula<->WinDomU:
[1860] local 213.250.XX.XX port 5001 connected with 213.250.BA.CU port 44458
[ ID] Interval       Transfer     Bandwidth
[1860]  0.0-10.0 sec  1.09 GBytes   935 Mbits/sec

Large Send Offload disabled, checksum offload disabled:
C:\TEMP>iperf -s -w1M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 1.00 MByte
------------------------------------------------------------
This is Dom0<->WinDomU:
[1872] local 213.250.XX.XX port 5001 connected with 213.250.DOM.0 port 32779
[ ID] Interval       Transfer     Bandwidth
[1872]  0.0-10.0 sec  1.84 GBytes  1.58 Gbits/sec
This is Bacula<->WinDomU:
[1860] local 213.250.XX.XX port 5001 connected with 213.250.BA.CU port 53880
[ ID] Interval       Transfer     Bandwidth
[1860]  0.0-10.0 sec  1.09 GBytes   936 Mbits/sec

The results seem to be the same with Large Send Offload enabled and disabled.

> Based on that, I'll get you to try some tcpdumps with offload enabled.

I will try to get some tcpdumps.
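
Something along these lines is probably what I will run on Dom0 (a sketch; br13 and the 213.250.XX.XX address stand in for my bridge and the DomU, and port 3389 assumes we are watching the remote desktop traffic):

# capture RDP traffic on the VLAN bridge, verbose enough to print checksum checks
tcpdump -i br13 -nn -vvv host 213.250.XX.XX and tcp port 3389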

> Is there a way you can check this without VLANs being involved? If
> there are bugs in the Linux side of things, combining all that
> offloading stuff with VLANs might just be too much for it...


Hmm, only if I use eth0/peth0 directly, since that is not a VLAN bridge.
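
If it comes to that, the non-VLAN test setup would presumably look something like this (a sketch; br0 is an assumed name, and the vif line goes in the DomU config file):

brctl addbr br0          # plain bridge, no VLAN tagging
brctl addif br0 peth0    # attach the physical NIC directly
ifconfig br0 up
# and in the DomU config, point the vif at the plain bridge:
# vif = [ 'bridge=br0' ]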

Terveisin/Regards,
  Pekka Panula, Net Servant Oy
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users