
Re: [Xen-users] Weird issue on heavy server



Hello All,

1) They all have unique MAC addresses (generated by the virt-clone utility).
2) The VM can ping the outside world, but it cannot be pinged from outside (verified via xm console).
3) There is no firewall in the VM.
4) Dom0 has the default CentOS firewall, as below:
--------------------------------

Chain INPUT (policy ACCEPT)
target     prot opt source               destination
ACCEPT     udp  --  anywhere             anywhere             udp dpt:domain
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:domain
ACCEPT     udp  --  anywhere             anywhere             udp dpt:bootps
ACCEPT     tcp  --  anywhere             anywhere             tcp dpt:bootps

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  anywhere             192.168.122.0/24     state RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     anywhere
ACCEPT     all  --  anywhere             anywhere
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
REJECT     all  --  anywhere             anywhere             reject-with icmp-port-unreachable
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif1.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif2.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif3.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif4.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif5.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif6.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif7.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif8.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif9.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif10.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif16.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif17.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif18.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif19.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif20.0
ACCEPT     all  --  anywhere             anywhere             PHYSDEV match --physdev-in vif21.0
------------------------------------
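A quick way to tell whether traffic for the affected VM (vm11 below, domain ID 21, so vif21.0 is the assumed backend name) is reaching these rules at all is to watch the per-rule counters and the vif itself. A rough sketch; the bridge name xenbr0 is an assumption, adjust both names to your setup:

-------------------------------------
# Show FORWARD rules with packet/byte counters; re-run while pinging the
# VM from outside and see which counter, if any, increments.
iptables -L FORWARD -v -n --line-numbers

# Watch ICMP on the VM's backend interface (vif21.0 assumed from domain ID 21).
tcpdump -ni vif21.0 icmp

# Watch the bridge side as well (bridge name is an assumption).
tcpdump -ni xenbr0 icmp
-------------------------------------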

5) Current VMs are as below:
-------------------------------------
Name                                        ID   Mem(MiB) VCPUs State   Time(s)
Domain-0                                     0      11633    16 r-----   9196.0
vm01                                         1       1024     1 -b----   1652.9
vm02                                         2       1024     1 -b----   1646.2
vm03                                         3       1024     1 -b----   1644.9
vm04                                         4       1024     1 -b----   1651.9
vm05                                         5       1024     1 -b----   1653.6
vm06                                         6       1024     1 -b----   2146.9
vm07                                         7       1024     1 -b----   1270.6
vm08                                         8       1024     1 -b----   1576.8
vm09                                         9       1024     1 -b----   1657.8
vm10                                        10       1024     1 -b----    595.5
vm11                                        21       1024     1 -b----      6.0
vm16                                        16       1024     1 -b----    476.5
vm17                                        17       1024     1 -b----    479.7
vm18                                        18       1024     1 -b----    479.9
vm19                                        19       1024     1 -b----    480.1
vm20                                        20       1024     1 -b----    478.1
-----------------------------

The problem is currently on vm11, whereas all other VMs are working fine.
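Since vm11 has domain ID 21, its backend interface should be vif21.0, which does appear in the FORWARD chain above. A minimal sanity check, with the interface name assumed from that ID, is to confirm the vif exists, is up, and is actually attached to the bridge:

-------------------------------------
# List bridges and their member interfaces; vif21.0 should appear under
# the bridge that carries the VM traffic.
brctl show

# Confirm the vif exists and is UP.
ip link show vif21.0
-------------------------------------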


On Tue, Aug 28, 2012 at 1:00 AM, Andrew Pitman <andrewpitman@xxxxxxxxxxx> wrote:
Alexandre,

I think I've run into this as well, on 4.1.2. As far as I could tell, it was an issue with the bridged Ethernet shared by my dom-Us (at the time, all HVM). Each domain had its own MAC address. The domains could still communicate amongst themselves internally on the Xen server, but nothing was being forwarded between the bridge and the physical network. A "service network restart" on my Fedora 13 dom-0 (2.6.32.40 kernel) fixed the problem, but I still have no idea why it did that in the first place.
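If it is the same symptom, a less disruptive thing to try before a full "service network restart" might be to bounce only the affected vif or re-attach it to the bridge. Purely a sketch along the same lines; the names below (vif21.0, xenbr0) are assumptions for vm11 on this system:

-------------------------------------
# Re-attach the suspect vif to the bridge (bridge name is an assumption).
brctl delif xenbr0 vif21.0
brctl addif xenbr0 vif21.0

# Or simply bounce the backend interface.
ip link set vif21.0 down
ip link set vif21.0 up
-------------------------------------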

I speculate that it might have had something to do with the allocation of vif/tap devices and the fact that they don't seem to be properly recycled when enough VMs are shut down and restarted on the system. Or at least, they always seem to increment as far as I can remember.
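One way to look at that allocation, as a rough sketch (xenstore paths can differ between toolstack versions):

-------------------------------------
# List the vif backends recorded in xenstore, keyed by frontend domain ID.
xenstore-ls /local/domain/0/backend/vif

# Compare with the vif interfaces that actually exist in dom0.
ip -o link show | grep vif
-------------------------------------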

Andy
--
Sent from my iPhone appendage

On Aug 27, 2012, at 11:48, Alexandre Kouznetsov <alk@xxxxxxxxxx> wrote:

> On 27/08/12 04:53, DN Singh wrote:
>> The problem that we are facing on two of them is, network on some VMs
>> gets stuck.
>> We cannot ping or SSH the VMs from outside the network, but xm console
>> shows the VMs are fine. The only confusion is why only on 2 VMs, and
>> that too on 2 nodes?
> Local firewall? iptables -L -v will answer that.
>
> Are they PV or HVM?
>
> What about the inside network? Can a VM (controlled via the console) ping, or do anything else? Does it behave differently when communicating with the outside world versus another VM on the same host?
>
>> Can someone please direct me in the right direction? Also, please let me
>> know if more information is required.
> Check with tcpdump whether you see traffic on your bridge while trying to communicate with the VMs.
>
> --
> Alexandre Kouznetsov
>
>


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users
