
Re: [Xen-users] GPL PV drivers for Windows 0.9.11-pre12


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Victor Hugo dos Santos" <listas.vhs@xxxxxxxxx>
  • Date: Thu, 4 Sep 2008 11:49:05 -0400
  • Delivery-date: Thu, 04 Sep 2008 08:49:42 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

On Tue, Aug 26, 2008 at 7:04 AM, James Harper
<james.harper@xxxxxxxxxxxxxxxx> wrote:
>> While these numbers seem to be true for the RX/TX tests between domU
>> <-> dom0, they don't apply to domU <-> another physical node on the
>> LAN. The RX is fine: on my 100mbit LAN I get about 94/95mbit. But the
>> TX is very poor: only 18-20mbit. This has been the case with all
>> versions I've tested before, but of course I hoped this one would
>> make the difference. I've compared the results with a Linux PV domU,
>> which does perform as expected (94-95mbit for both RX/TX).
>>
>
> Thanks for the feedback. I wonder if Large Send Offload is causing
> problems somewhere... Can you try turning off Large Send Offload and
> report the results?

Hello,

I have two systems (identical hardware), and on each of them I run two
Windows 2003 guests (all updates applied).
With gplpv -pre10 the system works, but network performance is very,
very slow.
With gplpv -pre13, same problem.
After disabling "Large Send Offload", the network works fine:

==================
iperf tests
on the server, run: iperf -s
on the client, run: iperf -c IP_REMOTE_SERVER -d -l 1M -w 1M

VM1 - before change
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.1 sec   113 MBytes  93.9 Mbits/sec
[  4]  0.0-12.4 sec  2.00 MBytes  1.36 Mbits/sec
-----------------------------------------------------------
VM1 - after change
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.1 sec  58.0 MBytes  48.4 Mbits/sec
[  6]  0.0-10.1 sec   104 MBytes  86.6 Mbits/sec
------------------------------------------------------------

VM2 - before change
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  6]  0.0-10.0 sec   101 MBytes  84.7 Mbits/sec
[  4]  0.0-10.1 sec   768 KBytes   624 Kbits/sec
------------------------------------------------------------
VM2 - after change
------------------------------------------------------------
[ ID] Interval       Transfer     Bandwidth
[  5]  0.0-10.0 sec  37.3 MBytes  31.2 Mbits/sec
[  6]  0.0-10.0 sec   111 MBytes  93.0 Mbits/sec
------------------------------------------------------------
==================
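
A note for anyone repeating the test: iperf's -d flag runs both
directions simultaneously, so the two [ID] rows in each run above are
the transmit and receive legs of the same test, and -l 1M / -w 1M
request a 1 MByte buffer and TCP window. To measure one direction at a
time, something like this should work:

  iperf -c IP_REMOTE_SERVER -l 1M -w 1M       # client -> server only
  iperf -c IP_REMOTE_SERVER -l 1M -w 1M -r    # then the reverse, run sequentially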

But in dom0 I see these errors (note the dropped packets) on both servers:

Server1
========
peth2     Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:31459415 errors:0 dropped:4058498 overruns:0 frame:0
          TX packets:10061928 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:1933464944 (1.8 GiB)  TX bytes:2618272070 (2.4 GiB)
          Memory:d8420000-d8440000
========

Server2
========
peth2     Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
          RX packets:28573933 errors:0 dropped:50760 overruns:0 frame:0
          TX packets:1347689 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:4134924775 (3.8 GiB)  TX bytes:1336000345 (1.2 GiB)
          Base address:0x3080 Memory:d8540000-d8560000
========

With Large Send Offload active, there were no dropped packets for a long
time (maybe never?); after the change, the systems report many dropped
packets.
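
To help correlate the drops with offload settings, the offload state of
the physical interface in dom0 can be checked (and changed, if needed)
with ethtool -- a quick sketch, assuming ethtool is available and peth2
is the interface shown above:

  ethtool -k peth2          # show current offload settings (TSO, GSO, checksums)
  ethtool -K peth2 tso off  # for example, turn TCP segmentation offload off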

Is there any problem (now or in the future) with dropped packets?
Is it possible to set Large Send Offload to disabled by default? What
would the consequences be?
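
For what it's worth, on Windows 2003 the per-adapter settings from
Device Manager > adapter properties > Advanced are stored under the
network class key in the registry, so the change could in principle be
scripted. The exact value name is whatever keyword the gplpv .inf
defines, so treat this only as a hypothetical sketch (the instance
subkey 0001 and the value name LargeSendOffload are placeholders --
check the Advanced tab first):

  reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0001"
  rem placeholder value name below; substitute the driver's real keyword:
  rem reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E972-E325-11CE-BFC1-08002BE10318}\0001" /v LargeSendOffload /t REG_SZ /d "0" /f

Whether the driver itself could ship with LSO off by default is probably
a question for James.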

thanks

-- 
Victor Hugo dos Santos
Linux Counter #224399

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

