
[Xen-API] [XCP 1.6] broad-/multicast & Open vSwitch & SLB bonding -> FAIL

  • To: xen-api@xxxxxxxxxxxxxxxxxxx
  • From: Markus Schuster <ml@xxxxxxxxxxxxxxxxxxxx>
  • Date: Wed, 16 Jan 2013 14:09:12 +0100
  • Delivery-date: Wed, 16 Jan 2013 13:10:42 +0000
  • Followup-to: gmane.comp.emulators.xen.api
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

Hi everybody,

I'm not sure if this is the best place to discuss this issue or if the Open 
vSwitch mailing list might be better, but let's try:

We have a pool of XCP 1.6 (final) hosts using Open vSwitch. Two NICs form an 
active/active SLB bond for network connectivity. 
Recently we migrated a two-node Tomcat cluster to this environment, and those 
two VMs had a very hard time being reachable from the outside. After 
investigating the problem a bit further, we learned that Tomcat uses multicast 
for cluster communication, and that's where the problem started. 
Open vSwitch sends the multicast frames out ALL physical interfaces 
belonging to the SLB bond. That causes a lot of confusion on the physical 
switches the XCP hosts are connected to (VM MAC addresses jumping between 
ports multiple times a second). 
Investigating even further, I noticed the very same problem occurs not only 
for multicast frames but also for ordinary broadcast frames (ARP, broadcast 
ping, ...) - luckily, Linux servers don't send that much broadcast 
traffic :)
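To make the "MAC addresses jumping between ports" effect concrete, here is a minimal sketch (our own toy model, not Open vSwitch code) of a physical switch's MAC learning table seeing duplicate copies of the same broadcast arrive on two uplink ports, because the bond flooded it out both slaves:

```python
# Toy model: a learning switch's MAC table flapping when the same source
# MAC shows up on two ports. All names here are illustrative only.

def learn(mac_table, mac, port, moves):
    """Update a learning switch's MAC table; count MAC moves ("flaps")."""
    if mac_table.get(mac) not in (None, port):
        moves[0] += 1  # MAC already learned on a different port: entry flaps
    mac_table[mac] = port

mac_table, moves = {}, [0]
vm_mac = "52:54:00:12:34:56"  # hypothetical VM MAC

# An SLB bond that floods broadcasts out BOTH slaves makes the upstream
# switch see the VM's MAC alternately on two of its ports:
for _ in range(5):
    learn(mac_table, vm_mac, "port1", moves)  # copy arriving via slave eth0
    learn(mac_table, vm_mac, "port2", moves)  # duplicate copy via slave eth1

print(moves[0])  # every duplicate after the first learn causes a flap
```

With each broadcast duplicated, the entry flaps on practically every frame, which matches the "multiple times a second" flapping we observed.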

We spent a few hours digging through the Open vSwitch source code, and it 
looks like there is some special handling for broadcast/multicast frames - 
flooding them out all ports except the one they came in on (classic bridge 
behavior) - but there seems to be no special handling for the SLB case, where 
I'd expect those packets to leave only on the active slave for the MAC/VLAN 
combination of the sending VM. 
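For clarity, here is a short sketch of the behavior we would expect from SLB bonding: deterministically pick ONE active slave per (source MAC, VLAN) pair, and send broadcast/multicast out only that slave rather than all of them. The function names and hash choice are ours, not Open vSwitch's:

```python
# Hypothetical sketch of expected SLB behavior, assuming a simple
# hash of (source MAC, VLAN) to choose the single active slave.
import zlib

SLAVES = ["eth0", "eth1"]  # the two physical NICs in the bond

def slb_active_slave(src_mac: str, vlan: int) -> str:
    """Deterministically map a (source MAC, VLAN) pair to one bond slave."""
    key = f"{src_mac}/{vlan}".encode()
    return SLAVES[zlib.crc32(key) % len(SLAVES)]

def broadcast_output_ports(src_mac, vlan, in_port, vm_ports):
    """Flood to all VM-facing ports except the ingress, plus exactly
    ONE bond slave - never both."""
    ports = [p for p in vm_ports if p != in_port]
    ports.append(slb_active_slave(src_mac, vlan))
    return ports

# A broadcast from the same VM always leaves on the same single uplink:
a = slb_active_slave("52:54:00:12:34:56", 100)
b = slb_active_slave("52:54:00:12:34:56", 100)
print(a == b, a in SLAVES)
```

With this kind of per-MAC/VLAN pinning, the upstream switch would only ever learn the VM's MAC on one bond port, and the flapping described above should disappear.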

Hope someone can help. 

