
Re: [Xen-users] Xen 10GBit Ethernet network performance (was: Re: Experience with Xen & AMD Opteron 4200 series?)



Hey Florian,

thank you for your input.

On Mon, Jun 25, 2012 at 11:42 PM, Florian Heigl <florian.heigl@xxxxxxxxx> wrote:
> 2012/6/24 Linus van Geuns <linus@xxxxxxxxxxxxx>:
>
>> Between two dom0 instances, I get only 200 to 250 MByte/s.
>>
>> I also tried the same between a dom0 and a plain hardware instance and
>
> Steps you can try:
>
> - do NOT configure a bridge in dom0, try normal eth0 <-> eth0 comms
>   (The Linux bridge is a BRIDGE, the thing everyone stopped
>   using in 1998.)

As I am still testing network performance in dom0, I have not yet set up
any virtual networking.
All tests were done on the "direct" interfaces (eth*) in dom0 and on
bare hardware, with no bridging or virtual switches involved.
When I run the tests on bare hardware, I get about 550 MByte/s; within
dom0, the speed drops to about 250 MByte/s.
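
If anyone wants to reproduce this kind of measurement, a plain TCP bulk
transfer between the two hosts is enough. Below is a minimal Python
sketch of such a test (host and port are placeholders, and it is not
necessarily the exact tool I used, just an equivalent illustration):

    # Minimal TCP bulk-transfer throughput test (sketch only).
    # Run "recv" on one host and "send <host>" on the other; each side
    # prints the rate it observed.
    import socket, sys, time

    PORT = 5001          # placeholder port
    CHUNK = 64 * 1024    # 64 KiB per send/recv call
    TOTAL = 4 * 1024**3  # transfer 4 GiB in total

    def receiver():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        got, t0 = 0, time.time()
        while True:
            data = conn.recv(CHUNK)
            if not data:
                break
            got += len(data)
        print("%.1f MByte/s" % (got / (time.time() - t0) / 1e6))

    def sender(host):
        sock = socket.create_connection((host, PORT))
        buf = b"\0" * CHUNK
        sent, t0 = 0, time.time()
        while sent < TOTAL:
            sock.sendall(buf)
            sent += CHUNK
        sock.shutdown(socket.SHUT_WR)  # flush before taking the end time
        print("%.1f MByte/s" % (sent / (time.time() - t0) / 1e6))

    if __name__ == "__main__":
        # usage: throughput.py recv           (on the receiving host)
        #        throughput.py send <host>    (on the sending host)
        if sys.argv[1] == "recv":
            receiver()
        else:
            sender(sys.argv[2])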

>
> - dom0 vcpu pinning
>   (because I wonder if the migrations between vcpus make things trip)

Already tried that and it had no effect at all. :-/
Any ideas?
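
For reference, pinning can be applied with "xl vcpu-pin" (or via the
dom0_vcpus_pin hypervisor boot option). A rough sketch of what I mean,
assuming four dom0 vCPUs pinned 1:1 onto the first physical CPUs
(adjust the count to your configuration):

    # Pin dom0 vCPUs 1:1 onto pCPUs 0..N-1 via "xl vcpu-pin" (sketch;
    # NUM_VCPUS = 4 is an assumption, not taken from this thread).
    import subprocess

    NUM_VCPUS = 4

    for vcpu in range(NUM_VCPUS):
        # xl vcpu-pin <domain> <vcpu> <pcpu>
        subprocess.check_call(["xl", "vcpu-pin", "Domain-0",
                               str(vcpu), str(vcpu)])

    # Show the resulting affinities:
    subprocess.check_call(["xl", "vcpu-list", "Domain-0"])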


>
> Things to keep in mind:
> ----------------------------------
> There were a handful of "new network" implementations to speed up IO
> performance (e.g. xenloop, fido) between domUs. None of them has gotten
> anywhere, although they were, indeed, fast.
> Stub IO domains as a concept were invented to take IO processing out
> of dom0. I have NO IDEA why that would be faster, but someone thought
> it would make a difference, otherwise it would not be there.

Or that someone did not want to process network traffic within the
privileged domain. ;-)

> It is very probable that by switching to an SR-IOV NIC, the whole
> issue is gone. Some day I'll be able to afford a SolarFlare 61xx NIC
> and benchmark it on my own.

As I am testing dom0 network performance on "direct" interfaces,
SR-IOV should not make a difference.
The X520-DA2 NICs do support SR-IOV and VMDq, though.
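
(If someone wants to double-check whether a given port actually
advertises the SR-IOV capability, something like this quick Python
sketch works; it just scans "lspci -vv" output and needs root so the
capability list is visible:)

    # Quick check which PCI devices advertise the SR-IOV capability (sketch).
    # Needs root, otherwise lspci hides the capability list.
    import subprocess

    out = subprocess.check_output(["lspci", "-vv"]).decode(errors="replace")
    for block in out.split("\n\n"):
        if "SR-IOV" in block:
            print(block.splitlines()[0])  # the PCI device line, e.g. the 82599 ports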

> The key thing with using "vNIC"s assigned to the domUs is that you get
> rid of the bridging idiocy and have more IO queues; some NICs will
> switch traffic between multiple domUs on the same NIC, and even if they
> can't: the 10gig switch next to your dom0 is definitely faster than the
> SW bridge code.

Is it possible to live migrate domUs using SR-IOV "vNICs"?
Basically, if Xen migrated the state of that vNIC to the target
host, it could work.
(Without knowing any details of SR-IOV at all :-D).

> Open vSwitch is a nice solution to generally replace the bridge, but I
> haven't seen anyone say that it gets anywhere near hardware
> performance.
>
> Last: I'm not sure if you will see the problem solved. I think it has
> never gotten extremely high prio.

First, I would like to identify the problem(s) that limit dom0 10GE
speed on my systems. ;-)

Regards, Linus

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users

 

