Re: [Xen-users] Problem pinging the xen guest after live migration
Thank you so much to both of you, John and Matej.
Yes, both of your solutions worked when I live migrate the guest from one dom0 to another dom0.
What I am doing now:
- On Host-B, I am executing the command:
      ping Guest-15a
- On Host-A, I am executing the command:
      xm migrate --live Guest-15a Host-B
The result:
The live migration went very well; I found there is only about a 1-second pause in the ping replies during the switchover, which I think is acceptable.
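In case it is useful, this is roughly how the pause can be seen from Host-B (the ping options below are just an example, not exactly what I typed):

    # On Host-B: timestamp every reply so the gap is visible
    ping -D -i 0.2 Guest-15a

    # On Host-A, in a second terminal, start the migration
    xm migrate --live Guest-15a Host-B

    # The gap between the last reply before the switchover and the first
    # reply after it is the downtime (about 1 second in my case)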
Our network team says that portfast is already configured on the switch.
But as the first option worked, I hope I can move on with it, right? If not, please advise...
My second question is about mode=4, i.e. 802.3ad (link aggregation), which still does not work. I wanted to
set mode=4 as it should give more bandwidth. I found one document, here is the URL:
It is very good. In this document, the first line mentions:
***** It is important that the native VLANs be identical on both sides of the link
What does it mean for the native VLANs to be identical on both sides of the link? Sorry for the silly question.
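My guess is that it refers to the untagged (native) VLAN on the switch ports: whatever VLAN carries untagged frames must be configured the same at both ends of the trunk/aggregated link. The snippet below is only my assumption of a Cisco-style switch port config; the interface name and VLAN numbers are made up:

    ! untagged (native) VLAN must match the other end of the link
    interface GigabitEthernet0/1
     switchport mode trunk
     switchport trunk native vlan 1
     switchport trunk allowed vlan 1,15,16,17
     ! LACP on the switch side, to pair with bonding mode=4 on the server
     channel-group 1 mode active

If that is right, then I suppose the only matching piece on my side is that plain bond0 (untagged) corresponds to that native VLAN, while bond0.15/16/17 carry the tagged traffic - please correct me if I misunderstood.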
Would you please explain whether I need to check/configure anything on the server side? My configuration is simple enough:
I have eth0 and eth1 bonded into bond0, and for each subnet I created a
config file: ifcfg-bond0.15, ifcfg-bond0.16, ifcfg-bond0.17. Then in /etc/xen/scripts I created
a custom script in which I set up a bridge for each VLAN tag, and this custom script is called
from /etc/xen/xend-config.sxp; a rough sketch is below.
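Roughly, the relevant files look like this (RHEL/CentOS-style paths; the exact BONDING_OPTS values, bridge names and script name are only examples of what I have, written from memory):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    # (eth0 and eth1 each have MASTER=bond0 and SLAVE=yes)
    DEVICE=bond0
    BOOTPROTO=none
    ONBOOT=yes
    # mode=1 (active-backup) is what works today; mode=4 (802.3ad) is what I want
    BONDING_OPTS="mode=1 miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-bond0.15  (same pattern for .16 and .17)
    DEVICE=bond0.15
    VLAN=yes
    BOOTPROTO=none
    ONBOOT=yes

    # /etc/xen/scripts/network-bridge-custom  (shell script, one bridge per VLAN)
    #!/bin/sh
    dir=$(dirname "$0")
    "$dir/network-bridge" "$@" vifnum=0 netdev=bond0.15 bridge=xenbr15
    "$dir/network-bridge" "$@" vifnum=1 netdev=bond0.16 bridge=xenbr16
    "$dir/network-bridge" "$@" vifnum=2 netdev=bond0.17 bridge=xenbr17

    # /etc/xen/xend-config.sxp
    (network-script network-bridge-custom)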
Please advise: should I proceed with link aggregation (mode=4), or stay with mode=1, which is configured now and
for which you all helped me find a workaround...
Thanks again.
On Mon, Jul 5, 2010 at 6:25 AM, John Haxby <john.haxby@xxxxxxxxxx> wrote: