
Re: [Xen-users] Live migration?


  • To: "Daniel J. Nielsen" <djn@xxxxxxxxxx>
  • From: "Chris Fanning" <christopher.fanning@xxxxxxxxx>
  • Date: Thu, 15 Mar 2007 15:05:31 +0100
  • Cc: xen-users@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 15 Mar 2007 07:04:38 -0700
  • Domainkey-signature: a=rsa-sha1; c=nofws; d=gmail.com; s=beta; h=received:message-id:date:from:to:subject:cc:in-reply-to:mime-version:content-type:content-transfer-encoding:content-disposition:references; b=Ai07j8OaJmUeJvr0Ay9FlvQ5H7IB8dwFZoFu8qmCA+4wuo7Sp0+70Jv0InGUqnnsWN6GfxFqDMuUljLu8BbDHL+1Q/7fIbXM4AovVt0bd9rrbqVdr3/9HDHeqvTYnn2TMYgkHH1cfmEfcijDhKPgt7C9pzcysB1eyVcHMIuJzDU=
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi Daniel,

Well, I've got this up and running on the workbench. I'm still using a
100 Mb/s network but intend to upgrade.

We still use Xen in production, but due to network I/O performance issues,
I wouldn't recommend our setup if you intend to run more than one or two
virtual machines on each dom0.

Can you please tell me more about this? I wouldn't like to continue
down this road if it's a dead end.

Thanks.
Chris.

On 3/13/07, Daniel J. Nielsen <djn@xxxxxxxxxx> wrote:
Hi Chris,

We still use Xen in production, but due to network I/O performance issues,
I wouldn't recommend our setup if you intend to run more than one or two
virtual machines on each dom0.

In the case described below, we discovered we had missed the experimental
support for hotpluggable CPUs in our custom Debian kernels. A recompile
later, and everything worked without a hitch.
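For anyone tripping over the same thing, a quick way to check whether a kernel was built with that option is to grep its config file. A minimal sketch; the helper name is made up, and the `/boot/config-*` path is just the usual Debian location, not something confirmed in this thread:

```shell
# Report whether a kernel config file enables hotpluggable CPUs
# (CONFIG_HOTPLUG_CPU), the option the recompile above added.
check_hotplug() {
    if grep -q '^CONFIG_HOTPLUG_CPU=y' "$1" 2>/dev/null; then
        echo "enabled"
    else
        echo "missing"
    fi
}

# On Debian, the running kernel's config usually lives in /boot:
check_hotplug "/boot/config-$(uname -r)"
```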

As for the network cards, I'm not sure. We use the ones provided in our HP
ProLiant servers. For one of our servers, there are two:

Broadcom Corporation NetXtreme BCM5704 Gigabit Ethernet (rev 10)
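That line is the sort of thing lspci prints, and filtering lspci output is the usual way to see which Ethernet controllers a box has. A generic sketch, not specific to the HP machines above:

```shell
# Keep only the Ethernet controller lines from lspci-style output;
# this is how a line like the Broadcom one above would be found.
list_nics() {
    grep -i 'ethernet'
}

# Typical usage on a live system:
#   lspci | list_nics
```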

I hope this clears things up. I'm not subscribed to xen-users anymore (I
just peruse the archives), so please include me in any eventual replies.

/Daniel

On 3/13/07 9:11 AM, "Chris Fanning" <christopher.fanning@xxxxxxxxx> wrote:

> Hello Daniel,
>
> I am trying to setup the same installation that you mention.
> I have dom0's on nfsroot and domU's on AoE.
>
> At present I've got everything on 100 Mb/s and it doesn't work very
> well. xend takes about 20 seconds to start up, and domUs don't recover
> their network connection after migration. I'd like to try it at 1000 Mb/s.
>
> Can you recommend the network cards I should use? I have some
> D-Links, but (for some reason) their modules don't get loaded even
> though lspci does show the cards.
> The thin server boxes need to boot pxe (of course).
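[For readers hitting the same "lspci sees it, no module loads" symptom: on the 2.6 kernels of that era, one approach is to take the card's PCI vendor:device ID from `lspci -n` and grep it in `/lib/modules/$(uname -r)/modules.pcimap` to find the claiming driver. A sketch of the ID extraction; the helper name, slot address, and example ID are made up for illustration:]

```shell
# Pull the vendor:device ID (e.g. 1186:4300) out of an "lspci -n" line.
# A regex is used rather than a fixed field number, since the column
# layout of "lspci -n" varies between pciutils versions.
pci_id() {
    grep -oE '[0-9a-f]{4}:[0-9a-f]{4}'
}

# Typical usage (02:01.0 is a placeholder slot address):
#   lspci -n -s 02:01.0 | pci_id
#   grep "$(lspci -n -s 02:01.0 | pci_id)" /lib/modules/$(uname -r)/modules.pcimap
```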
>
> Thanks.
> Chris.
>
> On 9/15/06, Daniel Nielsen <djn@xxxxxxxxxx> wrote:
>> Hi.
>>
>> We are currently migrating to Xen for our production servers, version
>> 3.0.2-2. But we are having problems with the live-migration feature.
>>
>> Our setup is this;
>>
>> We run Debian stable (sarge) with selected packages from backports.org. Our
>> glibc is patched to be "Xen-friendly". In our test setup, we have two dom0s,
>> both netbooting from a central NFS/tftpboot server, i.e. not storing anything
>> locally. Both dom0s have two Ethernet ports: eth0 is used by the dom0 and
>> eth1 is bridged to Xen.
>>
>> Our domUs also run Debian sarge on an NFS root, using the same kernel.
>> They have no "ties" to the local machine except for network access; they do
>> not mount any local drives or files as drives. Everything runs over NFS
>> and in RAM.
>>
>> When migrating machines (our dom0's are named after fictional planets, and
>> virtual machines after fictional spaceships):
>>
>> geonosis:/ root# xm migrate --live serenity lv426
>> It just hangs.
>>
>> A machine called serenity pops up on lv426:
>>
>> lv426:/ root# xm list
>> Name                              ID Mem(MiB) VCPUs State  Time(s)
>> Domain-0                           0      128     4 r----- 21106.6
>> serenity                           8     2048     1 --p---     0.0
>> lv426:/ root#
>>
>> But nothing happens.
>>
>> If we migrate a lower-memory domU with e.g. 256 MiB, it works without a hitch.
>> If we migrate a domU with e.g. 512 MiB, it sometimes works and other times it
>> doesn't. But for domUs with 2 GiB of RAM, it consistently fails.
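[For readers debugging similar hangs today: one generic thing to verify (not confirmed as the cause in this thread) is that the destination dom0 actually accepts relocation requests. On Xen 3.0.x that is controlled in /etc/xen/xend-config.sxp, roughly like this:]

```text
(xend-relocation-server yes)
(xend-relocation-port 8002)
(xend-relocation-hosts-allow '')
```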
>>
>> In the above example, if we wait several hours, then serenity will stop
>> responding, and geonosis will be left with:
>>
geonosis:/ root# xm list
>> Name                              ID Mem(MiB) VCPUs State  Time(s)
>> Domain-0                           0      128     4 r----- 21106.6
>> Zombie-serenity                    8      2048    2 -----d  3707.8
>> geonosis:/ root#
>>
>>
>> I have attached the relevant entries from the xend.log files from both
>> geonosis and lv426.
>>
>> I hope somebody is able to clear up what we are missing.
>>
>> I noticed in geonosis.log that it wants 2057 MiB. I'm unsure of what that
>> means...?
>>
>>
>> /Daniel
>> Portalen
>>
>>
>> _______________________________________________
>> Xen-users mailing list
>> Xen-users@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-users
>>
>>


