
[Xen-users] Live migration downtime longer when mem < maxmem


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Marconi Rivello" <marconirivello@xxxxxxxxx>
  • Date: Wed, 8 Aug 2007 09:13:45 -0300
  • Delivery-date: Wed, 08 Aug 2007 05:11:26 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi there,

I am experiencing a weird (at least to me) behavior in live migration.

I have two VMs, one with 1024M maxmem and the other with 512M maxmem. When I set the VM's mem equal to maxmem, I get around 3s of downtime. When I reduce mem to half of maxmem, I get around 20s of downtime! I tested with both VMs.
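
In case it helps, this is roughly what the relevant part of the domU config looks like for the 1024M guest (the guest name is just a placeholder):

    # xm-style config (Xen 3.x); "memory" is what the guest currently gets,
    # "maxmem" is the ceiling it is allowed to balloon up to.
    name   = "vm1"      # placeholder
    memory = 512        # the "mem" I reduce to half of maxmem in the slow case
    maxmem = 1024

The fast case is simply memory = 1024 (mem == maxmem), and the migration itself is started with "xm migrate --live vm1 <destination host>".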

Just a note: both the VMs and the physical machines are idle, and the network isn't dedicated, but the tests were run consecutively and repeated several times. I'm also working on figuring out why the downtime is longer than the few milliseconds described in the Xen papers; I think it has to do with the switch...
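
For what it's worth, a crude sketch of how a downtime figure like the 3s / 20s above can be obtained is a probe loop run from a third machine along these lines (the guest IP, port, and interval are placeholders, and this only measures the longest gap in TCP reachability, not true downtime):

    #!/usr/bin/env python
    # Rough downtime probe: keep connecting to the guest and report the
    # longest gap seen between two successful probes. All values below
    # are placeholders.
    import socket
    import time

    HOST = "192.168.1.100"   # guest IP (placeholder)
    PORT = 22                # any port the guest answers on
    INTERVAL = 0.05          # seconds between probes

    longest = 0.0
    last_ok = time.time()
    while True:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(INTERVAL)
        try:
            s.connect((HOST, PORT))
            now = time.time()
            if now - last_ok > longest:
                longest = now - last_ok
                print("longest gap so far: %.2fs" % longest)
            last_ok = now
        except socket.error:
            pass
        s.close()
        time.sleep(INTERVAL)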

But in the meantime, why on earth should a VM's migration downtime be longer when I reduce its mem?

I appreciate any ideas, theories, or even a trivial explanation that would make me look silly :)

Thanks.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

