
Re: [Xen-users] Xen 10GBit Ethernet network performance (was: Re: Experience with Xen & AMD Opteron 4200 series?)

2012/6/24 Linus van Geuns <linus@xxxxxxxxxxxxx>:

> Between two dom0 instances, I get only 200 up to 250MByte/s.
> I also tried the same between a dom0 and a plain hardware instance and

Steps you can try:

- do NOT configure a bridge in dom0; try plain eth0 <-> eth0 communication first.
   (The Linux bridge is a SOFTWARE bridge; that's the thing everyone stopped
using in 1998.)

- try dom0 vCPU pinning
   (because I wonder whether the migrations between vCPUs are tripping things up)
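To make those two steps concrete, here is a rough sketch (the interface names, IP address, and CPU numbers are examples for illustration only; adjust them for your setup, and older toolstacks use `xm` instead of `xl`):

```shell
# On the receiving dom0: measure raw eth0 <-> eth0 throughput
# with iperf3, with no bridge configured on either side.
iperf3 -s

# On the sending dom0 (assuming 192.168.0.2 is the receiver's eth0):
iperf3 -c 192.168.0.2 -t 30

# Pin dom0's vCPUs so they stop migrating between physical CPUs,
# e.g. vCPU 0 -> pCPU 0, vCPU 1 -> pCPU 1:
xl vcpu-pin Domain-0 0 0
xl vcpu-pin Domain-0 1 1
xl vcpu-list Domain-0   # verify the pinning took effect

# Or pin permanently at boot via the hypervisor command line
# (GRUB_CMDLINE_XEN in /etc/default/grub):
#   dom0_max_vcpus=4 dom0_vcpus_pin
```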

Things to keep in mind:
There have been a handful of "new network" implementations (i.e. XenLoop,
Fido) meant to speed up IO performance between domUs. None of them got
anywhere, although they were, indeed, fast.
Stub IO domains were invented as a concept to take IO processing out
of dom0. I have NO IDEA why that would be faster, but someone thought
it makes a difference, otherwise it would not be there.
It is very probable that the whole issue goes away if you switch to an
SR-IOV NIC. Some day I'll be able to afford a SolarFlare 61xx NIC and
benchmark it myself.
The key thing about assigning "vNIC"s to the domUs is that you get
rid of the bridging idiocy and gain more IO queues; some NICs can even
switch traffic between multiple domUs on the same NIC, and even where
they can't, the 10GbE switch next to your dom0 is definitely faster
than the software bridge code.
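For what it's worth, handing an SR-IOV virtual function to a domU with the xl toolstack looks roughly like this (the NIC name, VF count, domU name, and PCI addresses are made-up examples; your BDFs will differ, and the NIC driver must support SR-IOV):

```shell
# In dom0: carve the physical NIC into virtual functions (driver permitting)
echo 4 > /sys/class/net/eth0/device/sriov_numvfs
lspci | grep -i "virtual function"   # note the VFs' PCI addresses

# Make one VF assignable and attach it to a running guest
xl pci-assignable-add 0000:03:10.0
xl pci-attach mydomu 0000:03:10.0

# Or statically in the domU config file:
#   pci = [ '0000:03:10.0' ]
```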
Open vSwitch is a nice solution for replacing the bridge in general, but I
haven't seen anyone claim it gets anywhere near hardware speed.
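For reference, swapping the Linux bridge for Open vSwitch under xl is roughly the following (bridge and interface names are examples):

```shell
# In dom0: create an OVS bridge and put the physical NIC on it
ovs-vsctl add-br xenbr0
ovs-vsctl add-port xenbr0 eth0

# Tell xl to use the Open vSwitch vif hotplug script by default
# (in /etc/xen/xl.conf):
#   vif.default.script="vif-openvswitch"

# Or per guest in the domU config:
#   vif = [ 'bridge=xenbr0,script=vif-openvswitch' ]
```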

Last: I'm not sure you will ever see this problem solved. I don't think
it has ever gotten a very high priority.


Xen-users mailing list


