
[Xen-users] Live migration downtime longer when mem < maxmem


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Marconi Rivello" <marconirivello@xxxxxxxxx>
  • Date: Wed, 8 Aug 2007 08:25:43 -0300
  • Delivery-date: Fri, 17 Aug 2007 09:38:03 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi there,

I am experiencing a weird (at least to me) behavior in live migration.

I have two VMs, one with 1024M maxmem and the other with 512M maxmem. When I set a VM's mem equal to its maxmem, I get around 3s of downtime. When I reduce mem to half of maxmem, I get around 20s of downtime! I tested this with both VMs.
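
For reference, each test looks roughly like this (a sketch rather than my exact session: the domain name "vm1", the target host "host2", and the config path are placeholders, and I'm using the classic xm toolstack):

    # Domain config excerpt (e.g. /etc/xen/vm1.cfg)
    memory = 1024   # current allocation ("mem")
    maxmem = 1024   # ceiling the balloon driver may grow to

    # Case 1: mem == maxmem  ->  ~3s downtime
    xm mem-set vm1 1024
    xm migrate --live vm1 host2

    # Case 2: mem == maxmem/2  ->  ~20s downtime
    xm mem-set vm1 512
    xm migrate --live vm1 host2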

Just a note: both the VMs and the physical machines are idle. The network isn't dedicated, but the tests were run consecutively and repeated several times. I'm also trying to figure out why the downtime is longer than the few milliseconds described in the Xen papers. I think it has to do with the switch...

But in the meantime, why on earth should a VM's migration take longer when I reduce its mem?

I appreciate any ideas, theories, or even a trivial explanation that would make me look silly :)

Thanks.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
