
Re: [Xen-users] domU Networking Issues



Hi Tapas,

Thanks for your response.  Disabling STP does not change the observed
behavior.  The domU is still "missing" from external hosts via SSH and
PING; however, I am able to PING it from dom0.
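
(For reference, turning STP off on the bridge comes down to the standard
bridge-utils toggle, something along the lines of:

# brctl stp virbr0 off
# brctl show

where virbr0 is the only bridge that showed STP enabled in the earlier
brctl output; the second command just confirms that the STP column now
reads "no".)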

Thanks,
John

On Mon, 2010-06-14 at 16:36 +0530, Tapas Mishra wrote:
> OK, I will try to answer, but I am not an expert on this subject.
> I see from the brctl show output that Spanning Tree Protocol is enabled;
> can you set it to "no" and then check again?
> Is the behavior the same when STP is disabled?
> On Mon, Jun 14, 2010 at 4:06 PM, JohnD <lists@xxxxxxxxxxxxxxx> wrote:
> > Hi Tapas,
> >
> > Thanks for your reply.  The problem I have is that both the physical
> > server and the virtual instances appear on the network and then, with
> > some amount of idle time, disappear.
> >
> > For example, I sent my original question last night (from North America)
> > and I was able to get to both the physical servers and the virtual
> > instances.  However, this morning, I received "no route to host" for the
> > physical server, and after about 10 minutes of repeated pings returning
> > "Destination Host Unreachable" I was able to log into it again, meaning
> > it had come back online.
> >
> > At any rate, here are the results of the command:
> >
> > # brctl show
> > bridge name     bridge id               STP enabled     interfaces
> > virbr0          8000.000000000000       yes
> > xenbr0          8000.feffffffffff       no              vif2.0
> >                                                         vif1.0
> >                                                         peth0
> >                                                         vif0.0
> >
> > Thanks,
> > John
> >
> > On Mon, 2010-06-14 at 10:05 +0530, Tapas Mishra wrote:
> >> I am not able to understand the problem from your message, so
> >> correct me if I am wrong.
> >> Your DomU seems not to reply to ping while you are booting it up,
> >> or after the DomU is up it suddenly drops packets even though it had
> >> been replying to ping.
> >> Please give the output of brctl show.
> >>
> >> On Mon, Jun 14, 2010 at 4:54 AM, JohnD <lists@xxxxxxxxxxxxxxx> wrote:
> >> > Hi,
> >> >
> >> > I have Xen 3.0.3 installed on CentOS 5.5 with 2 paravirtualized domU's
> >> > configured to use the default bridge networking and am having networking
> >> > issues.  If I boot the computer fresh I am able to ping dom0 and can SSH
> >> > into it and start the domU's.
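> >> >
> >> > By "default bridge networking" I mean the stock xend setup on CentOS,
> >> > i.e. roughly these lines in /etc/xen/xend-config.sxp, assuming nothing
> >> > there was changed from the distribution defaults:
> >> >
> >> > (network-script network-bridge)
> >> > (vif-script vif-bridge)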
> >> >
> >> > The following ping snippet demonstrates the problem.  The first group of
> >> > "Destination Host Unreachable" lines appears while the domU is being
> >> > initialized, and then you can see that it is up and running when the
> >> > pings start to succeed.
> >> >
> >> > # ping 192.168.0.16
> >> > PING 192.168.0.16 (192.168.0.16) 56(84) bytes of data.
> >> > From 192.168.0.20 icmp_seq=1 Destination Host Unreachable
> >> > From 192.168.0.20 icmp_seq=2 Destination Host Unreachable
> >> > From 192.168.0.20 icmp_seq=3 Destination Host Unreachable
> >> > 64 bytes from 192.168.0.16: icmp_seq=4 ttl=64 time=2288 ms
> >> > 64 bytes from 192.168.0.16: icmp_seq=5 ttl=64 time=1279 ms
> >> > 64 bytes from 192.168.0.16: icmp_seq=6 ttl=64 time=279 ms
> >> > 64 bytes from 192.168.0.16: icmp_seq=7 ttl=64 time=0.173 ms
> >> > 64 bytes from 192.168.0.16: icmp_seq=8 ttl=64 time=0.167 ms
> >> >
> >> > However, if I wait a minute or so and ping again, the results are:
> >> >
> >> > # ping 192.168.0.16
> >> > PING 192.168.0.16 (192.168.0.16) 56(84) bytes of data.
> >> > From 192.168.0.20 icmp_seq=2 Destination Host Unreachable
> >> > From 192.168.0.20 icmp_seq=3 Destination Host Unreachable
> >> > From 192.168.0.20 icmp_seq=4 Destination Host Unreachable
> >> >
> >> > It's like the virtual instance is there, and then gone.  Here is the
> >> > output of ifconfig on the dom0:
> >> >
> >> > eth0      Link encap:Ethernet  HWaddr 00:30:48:DD:BF:E6
> >> >          inet addr:192.168.0.15  Bcast:192.168.0.255  Mask:255.255.255.0
> >> >          inet6 addr: fe80::230:48ff:fedd:bfe6/64 Scope:Link
> >> >          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >> >          RX packets:100938 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:105438 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:0
> >> >          RX bytes:22554988 (21.5 MiB)  TX bytes:36166834 (34.4 MiB)
> >> >
> >> > lo        Link encap:Local Loopback
> >> >          inet addr:127.0.0.1  Mask:255.0.0.0
> >> >          inet6 addr: ::1/128 Scope:Host
> >> >          UP LOOPBACK RUNNING  MTU:16436  Metric:1
> >> >          RX packets:59 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:59 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:0
> >> >          RX bytes:8735 (8.5 KiB)  TX bytes:8735 (8.5 KiB)
> >> >
> >> > peth0     Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
> >> >          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
> >> >          UP BROADCAST RUNNING NOARP PROMISC  MTU:1500  Metric:1
> >> >          RX packets:43429 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:81914 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:1000
> >> >          RX bytes:3861115 (3.6 MiB)  TX bytes:35054903 (33.4 MiB)
> >> >          Memory:faee0000-faf00000
> >> >
> >> > vif0.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
> >> >          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
> >> >          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
> >> >          RX packets:109516 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:104013 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:0
> >> >          RX bytes:36479083 (34.7 MiB)  TX bytes:23210637 (22.1 MiB)
> >> >
> >> > vif3.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
> >> >          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
> >> >          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
> >> >          RX packets:66747 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:33219 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:32
> >> >          RX bytes:19263437 (18.3 MiB)  TX bytes:2202911 (2.1 MiB)
> >> >
> >> > virbr0    Link encap:Ethernet  HWaddr 00:00:00:00:00:00
> >> >          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
> >> >          inet6 addr: fe80::200:ff:fe00:0/64 Scope:Link
> >> >          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >> >          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:0
> >> >          RX bytes:0 (0.0 b)  TX bytes:6452 (6.3 KiB)
> >> >
> >> > xenbr0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
> >> >          UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
> >> >          RX packets:290 errors:0 dropped:0 overruns:0 frame:0
> >> >          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> >> >          collisions:0 txqueuelen:0
> >> >          RX bytes:28601 (27.9 KiB)  TX bytes:0 (0.0 b)
> >> >
> >> > I have the dom0 and the 2 domU's set up using static IPv4 networking
> >> > and created the domU's to use the xenbr0 bridge during initial
> >> > creation.  Here is the config file for one of the domU's:
> >> >
> >> > name = "mailsrv01"
> >> > uuid = "7f378a13-21aa-0ea0-9cc9-b348bcb2eb94"
> >> > maxmem = 3000
> >> > memory = 3000
> >> > vcpus = 1
> >> > bootloader = "/usr/bin/pygrub"
> >> > on_poweroff = "destroy"
> >> > on_reboot = "restart"
> >> > on_crash = "restart"
> >> > disk = [ "tap:aio:/etc/xen/mailsrv01-os-hdd.img,xvda,w" ]
> >> > vif = [ "mac=00:16:36:43:fb:b6,bridge=xenbr0,script=vif-bridge" ]
> >> >
> >> > I have disabled iptables and the firewall on the domU's, hoping that
> >> > would address the issue, but it hasn't.
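> >> >
> >> > For what it's worth, the iptables shutdown was just the usual CentOS
> >> > commands run inside each domU, roughly:
> >> >
> >> > # service iptables stop
> >> > # chkconfig iptables off
> >> >
> >> > so the domU firewall rules should not be a factor here.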
> >> >
> >> > Not sure if this is any help, but here is a snippet of /var/log/messages
> >> > when I start the above domU using "xm create mailsrv01":
> >> >
> >> > Jun 13 18:13:38 mailprod01 kernel: ADDRCONF(NETDEV_CHANGE): vif4.0: link
> >> > becomes ready
> >> > Jun 13 18:13:38 mailprod01 kernel: xenbr0: topology change detected,
> >> > propagating
> >> > Jun 13 18:13:38 mailprod01 kernel: xenbr0: port 3(vif4.0) entering
> >> > forwarding state
> >> > Jun 13 18:14:53 mailprod01 ntpd[3572]: synchronized to 173.203.202.87,
> >> > stratum 2
> >> > Jun 13 18:15:37 mailprod01 kernel: xenbr0: port 3(vif4.0) entering
> >> > disabled state
> >> > Jun 13 18:15:37 mailprod01 kernel: device vif4.0 left promiscuous mode
> >> > Jun 13 18:15:37 mailprod01 kernel: xenbr0: port 3(vif4.0) entering
> >> > disabled state
> >> > Jun 13 18:16:18 mailprod01 kernel: tap tap-5-51712: 2 getting info
> >> > Jun 13 18:16:18 mailprod01 kernel: device vif5.0 entered promiscuous
> >> > mode
> >> > Jun 13 18:16:18 mailprod01 kernel: ADDRCONF(NETDEV_UP): vif5.0: link is
> >> > not ready
> >> > Jun 13 18:16:22 mailprod01 kernel: blktap: ring-ref 8, event-channel 6,
> >> > protocol 1 (x86_64-abi)
> >> > Jun 13 18:16:30 mailprod01 kernel: ADDRCONF(NETDEV_CHANGE): vif5.0: link
> >> > becomes ready
> >> > Jun 13 18:16:30 mailprod01 kernel: xenbr0: topology change detected,
> >> > propagating
> >> > Jun 13 18:16:30 mailprod01 kernel: xenbr0: port 3(vif5.0) entering
> >> > forwarding state
> >> >
> >> > I have spent 3 weeks working through this issue and am still unable to
> >> > resolve it.  Can anyone point me to what I am missing or need to change
> >> > so that the domU's remain "visible" on the network?
> >> >
> >> > Thanks,
> >> > John
> >> >
> >> >
> >>
> >>
> >>
> >
> >
> >
> 
> 
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

