
Re: [Xen-users] Yet another question about multiple NICs



Philippe,

I too struggled at first with multiple NICs in a Debian Lenny machine. 
The problem became a lot easier to fix when I set udev rules on the
NICs.  Otherwise, I noticed that the same NIC would come online with a
different name (eth0, eth1, etc.) after a reboot.
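
If it helps, the idea is to pin each name to its MAC address in
/etc/udev/rules.d/70-persistent-net.rules with rules along these lines
(the MACs below are only examples - substitute your own):

 SUBSYSTEM=="net", ATTR{address}=="00:14:4f:40:ca:74", NAME="eth0"
 SUBSYSTEM=="net", ATTR{address}=="00:14:4f:40:ca:75", NAME="eth1"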

Once I set the udev rules, I was able to really get to the root of the
networking problem.

I now have a three-NIC firewall virtualized under Xen.  One NIC is a
physical NIC passed through to it via pciback.hide.  The other two NICs
in the firewall DomU are virtual interfaces.
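
For reference, hiding a NIC from dom0 that way meant booting the dom0
kernel with something like pciback.hide=(0000:03:00.0) on its command
line (the PCI address is only an example - lspci shows the real one),
and then assigning the device in the DomU config file with a line like:

 pci = [ '03:00.0' ]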

Finally, when I was setting up my machine I found an article on the
Debian site which provided some guidance on the Xen wrapper script.
However, the article had a typo or something in it that didn't work
for me.  I remember posting a comment on the site with the fix that
worked for me.  I could try to find that article again and/or share the
wrapper script that ended up working for my setup.

---
Tom Jensen | President
Digital Toolbox
Email | tom.jensen@xxxxxxxxxxxxxxxxxxxxxx

On Fri, 17 Dec 2010 14:21:03 +0100, Felix Kuperjans
<felix@xxxxxxxxxxxxxxxxxx> wrote:
> Hi Philippe,
> 
> I forgot about Xen's renaming... The firewall rules do nothing special;
> they won't hurt anything.
> The IP addresses are also correct (on both sides), but the routes are
> probably not OK:
> - dom1 does not have a default route, so it will not be able to reach
> anything outside the two subnets (but should reach anything inside
> them) - see the example after this list.
> - It's interesting that dom1's firewall output shows that no packets
> were processed, so maybe you haven't pinged anything from dom1 since
> the last reboot, or the firewall was only loaded (with zeroed counters)
> at the moment you read its statistics...
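> For the missing default route, something like this on dom1 should do
> (the gateway here is the eth0 gateway from your dom0 routing table;
> use the other one if dom1 should route via the 192.168.24.0 LAN):
> # ip route add default via 172.16.113.126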
> Still, that is no reason why you can't ping local machines from dom1
> (and sometimes not even from dom0). Have you tried pinging each other,
> i.e. dom0 -> dom1 and vice versa?
> 
> The only remaining thing that could be blocking communication is ARP,
> so the output of:
> # ip neigh show
> on both machines *directly after* a ping would be useful (within a few
> seconds - use && and a count-limited ping).
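> For example, from dom1 (172.16.113.121 is dom0's eth0 address from
> your mail):
> # ping -c 2 172.16.113.121 && ip neigh show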
> 
> Regards,
> Felix
> 
> On 17.12.2010 13:32, Philippe Combes wrote:
>> Hi Felix,
>>
>> After fighting alone with this for so long, it is a real comfort to
>> get such a quick answer. Thanks.
>>
>> Felix Kuperjans wrote:
>>> just some questions:
>>> - Do you use a firewall in dom0 or domU?
>>
>> No, unless there is some hidden firewall in the default
>> installation of Debian lenny :)
>>
>>> - Are those two physical interfaces probably connected to the same
>>> physical network?
>>
>> No. I wrote: "each in a different LAN". This is what I meant. To
>> connect both networks to one another, I would need a routing machine.
>>
>>> - Can you post the outputs of the following commands in both dom0 and
>>> domU when your setup has just started:
>>
>> In dom0...
>> --
>> $ ip addr show
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>     inet 127.0.0.1/8 scope host lo
>>     inet6 ::1/128 scope host
>>        valid_lft forever preferred_lft forever
>> 2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
>> pfifo_fast state UP qlen 1000
>>     link/ether 00:14:4f:40:ca:74 brd ff:ff:ff:ff:ff:ff
>>     inet6 fe80::214:4fff:fe40:ca74/64 scope link
>>        valid_lft forever preferred_lft forever
>> 3: peth1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
>> pfifo_fast state UP qlen 100
>>     link/ether 00:14:4f:40:ca:75 brd ff:ff:ff:ff:ff:ff
>>     inet6 fe80::214:4fff:fe40:ca75/64 scope link
>>        valid_lft forever preferred_lft forever
>> 4: eth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>     link/ether 00:14:4f:40:ca:76 brd ff:ff:ff:ff:ff:ff
>> 5: eth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
>>     link/ether 00:14:4f:40:ca:77 brd ff:ff:ff:ff:ff:ff
>> 6: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>> 7: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>> 8: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>> 9: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>> 10: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>> 11: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>> 12: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>> 13: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
>>     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
>> 14: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>> state UNKNOWN
>>     link/ether 00:14:4f:40:ca:74 brd ff:ff:ff:ff:ff:ff
>>     inet 172.16.113.121/25 brd 172.16.113.127 scope global eth0
>>     inet6 fe80::214:4fff:fe40:ca74/64 scope link
>>        valid_lft forever preferred_lft forever
>> 15: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>> state UNKNOWN
>>     link/ether 00:14:4f:40:ca:75 brd ff:ff:ff:ff:ff:ff
>>     inet 192.168.24.123/25 brd 192.168.24.127 scope global eth1
>>     inet6 fe80::214:4fff:fe40:ca75/64 scope link
>>        valid_lft forever preferred_lft forever
>> 16: vif1.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
>> pfifo_fast state UNKNOWN qlen 32
>>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>>     inet6 fe80::fcff:ffff:feff:ffff/64 scope link
>>        valid_lft forever preferred_lft forever
>> 17: vif1.1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc
>> pfifo_fast state UNKNOWN qlen 32
>>     link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
>>     inet6 fe80::fcff:ffff:feff:ffff/64 scope link
>>        valid_lft forever preferred_lft forever
>> --
>>
>> --
>> $ ip route show
>> 172.16.113.0/25 dev eth0  proto kernel  scope link  src 172.16.113.121
>> 192.168.24.0/25 dev eth1  proto kernel  scope link  src 192.168.24.123
>> default via 192.168.24.125 dev eth1
>> default via 172.16.113.126 dev eth0
>>
>> I tried to remove the first 'default' route with route del
>> default ..., but nothing changed.
>> --
>>
>> --
>> $ iptables -nvL
>> Chain INPUT (policy ACCEPT 744 packets, 50919 bytes)
>>  pkts bytes target     prot opt in     out     source destination
>>
>> Chain FORWARD (policy ACCEPT 22 packets, 1188 bytes)
>>  pkts bytes target     prot opt in     out     source destination
>>     3   219 ACCEPT     all  --  *      *       0.0.0.0/0
>> 0.0.0.0/0           PHYSDEV match --physdev-in vif1.0
>>
>> Chain OUTPUT (policy ACCEPT 582 packets, 76139 bytes)
>>  pkts bytes target     prot opt in     out     source destination
>> --
>>
>> --
>> $ brctl show
>> bridge name     bridge id               STP enabled     interfaces
>> eth0            8000.00144f40ca74       no              peth0
>>                                                         vif1.0
>> eth1            8000.00144f40ca75       no              peth1
>>                                                         vif1.1
>> --
>>
>>
>> In the dom1...
>> --
>> # ip addr show
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
>>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>     inet 127.0.0.1/8 scope host lo
>>     inet6 ::1/128 scope host
>>        valid_lft forever preferred_lft forever
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state UNKNOWN qlen 1000
>>     link/ether 00:16:3e:55:af:c2 brd ff:ff:ff:ff:ff:ff
>>     inet 172.16.113.81/25 brd 172.16.113.127 scope global eth0
>>     inet6 fe80::216:3eff:fe55:afc2/64 scope link
>>        valid_lft forever preferred_lft forever
>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state UNKNOWN qlen 1000
>>     link/ether 00:16:3e:55:af:c3 brd ff:ff:ff:ff:ff:ff
>>     inet 192.168.24.81/25 brd 192.168.24.127 scope global eth1
>>     inet6 fe80::216:3eff:fe55:afc3/64 scope link
>>        valid_lft forever preferred_lft forever
>> --
>>
>> --
>> # ip route show
>> 172.16.113.0/25 dev eth0  proto kernel  scope link  src 172.16.113.81
>> 192.168.24.0/25 dev eth1  proto kernel  scope link  src 192.168.24.81
>> --
>>
>> --
>> # iptables -nvL
>> Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
>>  pkts bytes target     prot opt in     out     source destination
>>
>> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>>  pkts bytes target     prot opt in     out     source destination
>>
>> Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
>>  pkts bytes target     prot opt in     out     source destination
>> --
>>
>>
>>
>> I could not see anything weird in these outputs. Can you?
>>
>>
>>> - Is your bridge really named the same as your network interface (i.e.
>>> both eth0), or is the network interface renamed? Probably something got
>>> confused there (ip addr will show it anyway).
>>
>> In Xen 3.2.1, the network-bridge script renames eth<i> to peth<i>,
>> brings it down, and sets up a bridge named eth<i>.
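>> Roughly, for each interface it does the equivalent of the following
>> (a from-memory sketch - the real script also moves the IP configuration
>> and routes over to the bridge):
>>  ip link set eth0 down
>>  ip link set eth0 name peth0
>>  brctl addbr eth0
>>  brctl addif eth0 peth0
>>  ip link set peth0 up
>>  ip link set eth0 up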
>>
>>
>> Regards,
>> Philippe
>>
>>
>>>
>>> On 17.12.2010 11:57, Philippe Combes wrote:
>>>> Dear Xen users,
>>>>
>>>> I have tried for weeks to get a domU connected to both NICs of the
>>>> dom0, each in a different LAN. Google gave me plenty of tutorials
>>>> and HowTos on the subject, including the Xen and Debian Xen
>>>> wikis, of course. It seems so simple!
>>>> Some advise using a simple wrapper around /etc/xen/scripts/network-bridge,
>>>> others leaving it aside and setting up the bridges by hand.
>>>> But there must be something obvious that I am missing, something so
>>>> obvious that no manual needs to explain it, because I have tried every
>>>> solution and variant I found on the Internet without success.
>>>>
>>>> My dom0 first ran CentOS 5.5 with Xen 3.0.3. I tried to have eth1 up
>>>> and configured both in dom0 and in a domU, and never succeeded (details
>>>> below), so I followed the advice of some colleagues who told me my
>>>> issues might come from running a Debian lenny domU on a CentOS
>>>> dom0 (because the domU used the CentOS kernel instead of Debian
>>>> lenny's more recent one).
>>>>
>>>> So now my dom0 runs an up-to-date Debian lenny with Xen 3.2.1, but I
>>>> see the same behaviour when trying to get two interfaces into a domU.
>>>> As I said before, I tried several configurations, but let's stick
>>>> for now to the one based on the network-bridge script.
>>>> In /etc/network/interfaces:
>>>>  auto eth0
>>>>  iface eth0 inet dhcp
>>>>  auto eth1
>>>>  iface eth1 inet dhcp
>>>> In /etc/xen/xend-config.sxp:
>>>>  (network-script network-bridge-wrapper)
>>>> /etc/xen/scripts/network-bridge-wrapper:
>>>>  #!/bin/bash
>>>>  dir=$(dirname "$0")
>>>>  "$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=eth0
>>>>  "$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=eth1
>>>> In domU configuration file:
>>>>  vif = [ 'mac=00:16:3E:55:AF:C2,bridge=eth0',
>>>> 'mac=00:16:3E:55:AF:C3,bridge=eth1' ]
>>>>
>>>> With this configuration, I get both bridges eth<i> configured and
>>>> usable: that is, I can ping a machine on each LAN through the
>>>> corresponding interface.
>>>>
>>>> When I start a domU, however, dom0 and the domU are alternately
>>>> connected to the LAN of eth1, but never both at once. In other
>>>> words, dom0 is connected to the LAN on eth1 for a couple of
>>>> minutes while the domU is not; then, with no other trigger than
>>>> inactivity on the interface, the situation reverses: the domU is
>>>> connected and dom0 is not. After another couple of minutes of
>>>> inactivity it switches back to the first situation, and so on...
>>>> I noticed that the 'switch' does not occur as long as the side that
>>>> is currently connected keeps a continuous ping running to another
>>>> machine on the LAN.
>>>>
>>>> This happened with CentOS too, but I did not try anything else
>>>> under that distro. Under Debian, I tried keeping dom0's eth1 down (no
>>>> IP), but then the domU's eth1 did not work at all, not even
>>>> intermittently.
>>>>
>>>> I was pretty sure the issue came from the way my bridges were
>>>> configured, that there was something different about the dom0 primary
>>>> interface, etc. Hence I tried all the solutions I could find on the
>>>> Internet, without success.
>>>> I then made a simple test. Instead of binding domU's eth<i> to dom0's
>>>> eth<i>, I bound it to dom0's eth<1-i>: I changed
>>>>  vif = [ 'mac=00:16:3E:55:AF:C2,bridge=eth0',
>>>> 'mac=00:16:3E:55:AF:C3,bridge=eth1' ]
>>>> to
>>>>  vif = [ 'mac=00:16:3E:55:AF:C3,bridge=eth1',
>>>> 'mac=00:16:3E:55:AF:C2,bridge=eth0' ]
>>>> I was very surprised to see that dom0's eth0, domU's eth0 and dom0's
>>>> eth1 were all working normally, but not domU's eth1. There was no
>>>> alternation between dom0's eth0 and domU's eth1 this time, probably
>>>> because there is always some kind of activity on dom0's eth0 (NFS,
>>>> monitoring).
>>>>
>>>> So it seems that my issue is NOT related to the dom0 bridges, but to
>>>> the order of the vifs in the domU description. However, in the
>>>> xend.log file, there is no difference in the way the two vifs are
>>>> processed.
>>>>  [2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: vif : {'bridge': 'eth1', 'mac': '00:16:3E:55:AF:C2', 'uuid': '9dbf60c7-d785-96e2-b036-dc21b669735c'}
>>>>  [2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController: writing {'mac': '00:16:3E:55:AF:C2', 'handle': '0', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/2/0'} to /local/domain/2/device/vif/0.
>>>>  [2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController: writing {'bridge': 'eth1', 'domain': 'inpiftest', 'handle': '0', 'uuid': '9dbf60c7-d785-96e2-b036-dc21b669735c', 'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3E:55:AF:C2', 'frontend-id': '2', 'state': '1', 'online': '1', 'frontend': '/local/domain/2/device/vif/0'} to /local/domain/0/backend/vif/2/0.
>>>>  [2010-12-16 14:51:27 3241] INFO (XendDomainInfo:1514) createDevice: vif : {'bridge': 'eth0', 'mac': '00:16:3E:55:AF:C3', 'uuid': '1619a9f8-8113-2e3c-e566-9ca9552a3a93'}
>>>>  [2010-12-16 14:51:27 3241] DEBUG (DevController:118) DevController: writing {'mac': '00:16:3E:55:AF:C3', 'handle': '1', 'protocol': 'x86_64-abi', 'backend-id': '0', 'state': '1', 'backend': '/local/domain/0/backend/vif/2/1'} to /local/domain/2/device/vif/1.
>>>>  [2010-12-16 14:51:27 3241] DEBUG (DevController:120) DevController: writing {'bridge': 'eth0', 'domain': 'inpiftest', 'handle': '1', 'uuid': '1619a9f8-8113-2e3c-e566-9ca9552a3a93', 'script': '/etc/xen/scripts/vif-bridge', 'mac': '00:16:3E:55:AF:C3', 'frontend-id': '2', 'state': '1', 'online': '1', 'frontend': '/local/domain/2/device/vif/1'} to /local/domain/0/backend/vif/2/1.
>>>>
>>>> There I am stuck, and it is very frustrating. It all looks so simple
>>>> in the tutorials that I must have missed something obvious - but what?
>>>> Any clue, any lead to follow, will be truly welcome. Please do
>>>> not hesitate to ask me for relevant logs, or for any experiment you
>>>> think might be useful.
>>>>
>>>> Thanks for your help,
>>>> Philippe.
>>>>


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

