
RE: [Xen-users] xen 3 networking concepts



 
> After starting xend, with the default bridging setup, the 
> following kernel messages are displayed:

After a fresh boot, execute 'sh -x /etc/xen/scripts/network-bridge
start' and look at the output.

I suspect the script is bailing out part way through.
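
For reference, a quick way to see how far it got (just a sanity check,
assuming the bridge-utils package is installed so brctl is available)
is to compare the bridge membership and addressing afterwards:

# brctl show xen-br0
# ip addr show
# ip route show

Look at which ports ended up enslaved to xen-br0 and which interface
is left holding the IP address and the default route; that should make
it fairly obvious where the setup stopped.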

Ian
 
>         Bridge firewalling registered
>         device vif0.0 entered promiscuous mode
>         xen-br0: port 1(vif0.0) entered learning state
>         device eth0 entered promiscuous mode
>         xen-br0: port 2(eth0) entered learning state
>         xen-br0: topology change detected, propagating
>         xen-br0: port 1(vif0.0) entered forwarding state
>         xen-br0: topology change detected, propagating
>         xen-br0: port 2(eth0) entered forwarding state
>         eth0: Media Link On 100mbps full-duplex
> 
> "ifconfig -a" output looks like:
> 
>         eth0      Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
>                   inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
>                   UP BROADCAST RUNNING NOARP MULTICAST  MTU:1500  Metric:1
>                   RX packets:99 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:1000
>                   RX bytes:12369 (12.0 KiB)  TX bytes:2429 (2.3 KiB)
>                   Interrupt:11 Base address:0xd400
> 
>         veth0     Link encap:Ethernet  HWaddr 00:0B:6A:A7:DD:76
>                   inet addr:192.168.0.20  Bcast:192.168.0.255  Mask:255.255.255.255
>                   inet6 addr: fe80::20b:6aff:fea7:dd76/64 Scope:Link
>                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>                   RX packets:16 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:9 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:1835 (1.7 KiB)  TX bytes:714 (714.0 b)
> 
>         veth1     Link encap:Ethernet  HWaddr 00:00:00:00:00:00
>                   BROADCAST MULTICAST  MTU:1500  Metric:1
>                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
> 
>         (same up to veth7)
> 
>         vif0.1    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
>                   BROADCAST MULTICAST  MTU:1500  Metric:1
>                   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
> 
>         (same up to vif0.7)
> 
>         xen-br0   Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
>                   inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
>                   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>                   RX packets:11 errors:0 dropped:0 overruns:0 frame:0
>                   TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
>                   collisions:0 txqueuelen:0
>                   RX bytes:1285 (1.2 KiB)  TX bytes:378 (378.0 b)
> 
> "ip ro" is now as follows:
> 
>         192.168.0.0/24 dev veth0  proto kernel  scope link  src 192.168.0.20
>         default via 192.168.0.7 dev veth0
> 
> and networking does not work.
> 
> I can make it work by doing the following:
> 
> # ip addr add 192.168.0.20 dev xen-br0
> # ip ro del 192.168.0.0/24
> # ip ro del default
> # ip ro add default via 192.168.0.7 dev xen-br0
> 
> Is this normal?  What is the neatest way to ensure my Debian box 
> has working network connectivity after a reboot, once xend starts?
> 
> Cheers,
> Andy
> 
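
Regarding the question above about getting working networking straight
after a reboot: if the network-bridge script really is failing and you
would rather bring the bridge up yourself, one approach on Debian
(only a sketch, assuming the bridge-utils package is installed and
that you then stop xend from running its own bridge setup, e.g. by
pointing network-script at a no-op script in /etc/xen/xend-config.sxp)
is to define the bridge in /etc/network/interfaces:

        auto xen-br0
        iface xen-br0 inet static
                address 192.168.0.20
                netmask 255.255.255.0
                gateway 192.168.0.7
                bridge_ports eth0

ifupdown then puts the address and default route on xen-br0 at boot,
and the vif hotplug script should only need to attach each domU's vif
to the existing bridge (assuming the domU configs name xen-br0 as
their bridge).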

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

