
[Xen-users] Xen Hypervisor memory utilization (leak?)


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Frank S Fejes III <frank@xxxxxxxxx>
  • Date: Thu, 18 Mar 2010 08:28:05 -0500
  • Delivery-date: Thu, 18 Mar 2010 06:29:38 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hello, everybody.  Over the past couple of months I've noticed that a
few of our Xen servers are reporting (via xentop) a discrepancy
between the sum of the VMs' memory allocations and the hypervisor's
actual memory usage.  Over time, free memory drops to the point where
no new VMs can be started.  Below is sample xentop output from
yesterday, with extra columns chopped.  The hypervisor reports that it
is using 32GB of RAM, whereas the running VMs sum to only 16GB.
Ballooning is disabled and dom0_mem is set to 4GB.

xentop - 15:38:20   Xen 3.4.1_19718_04-2.1
12 domains: 2 running, 10 blocked, 0 paused, 0 crashed, 0 dying, 0 shutdown
Mem: 33549248k total, 33241552k used, 307696k free    CPUs: 4 @ 2992MHz
      NAME  STATE   CPU(sec) CPU(%)     MEM(k) MEM(%)  MAXMEM(k) MAXMEM(%)
xxxxxxxxxx --b---       1584    0.2     547840    1.6     547840       1.6
  Domain-0 -----r     165894   34.3    4192768   12.5   no limit       n/a
xxxxxxxxxx --b---      15166   22.3    3071916    9.2    3076096       9.2
xxxxxxxxxx -----r       1328   77.8    1028012    3.1    1028096       3.1
xxxxxxxxxx --b---      55433    4.7    1027880    3.1    1028096       3.1
xxxxxxxxxx --b---      67527    4.8    1027880    3.1    1028096       3.1
xxxxxxxxxx --b---      73019    6.8    1027880    3.1    1028096       3.1
xxxxxxxxxx --b---      61266    6.8    1027880    3.1    1028096       3.1
xxxxxxxxxx --b---      78880    6.2    1028012    3.1    1028096       3.1
xxxxxxxxxx --b---        969    7.7     516012    1.5     516096       1.5
xxxxxxxxxx --b---      10544    1.5    1027880    3.1    1028096       3.1
xxxxxxxxxx --b---      21001    5.8    1027880    3.1    1028096       3.1
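
As a sanity check, the gap can be computed mechanically from the
output above: sum the MEM(k) column across all domains (including
Domain-0) and compare the total against the "used" figure on the
"Mem:" line.  A minimal parsing sketch, not part of the original
report; the guest names below are placeholders for the redacted ones:

```python
def memory_gap_kib(xentop_text):
    """Return (used_kib, domain_sum_kib, gap_kib) parsed from xentop output."""
    used_kib = 0
    domain_sum_kib = 0
    for line in xentop_text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "Mem:":
            # e.g. "Mem: 33549248k total, 33241552k used, 307696k free"
            used_kib = int(fields[3].rstrip("k"))
        elif (len(fields) >= 5 and fields[4].isdigit()
              and set(fields[1]) <= set("-rbpcds")):
            # Domain row: NAME STATE CPU(sec) CPU(%) MEM(k) ...
            domain_sum_kib += int(fields[4])
    return used_kib, domain_sum_kib, used_kib - domain_sum_kib

# The xentop table above, with anonymized guest names substituted.
SAMPLE = """\
Mem: 33549248k total, 33241552k used, 307696k free    CPUs: 4 @ 2992MHz
guest01    --b---       1584    0.2     547840    1.6     547840       1.6
Domain-0   -----r     165894   34.3    4192768   12.5   no limit       n/a
guest02    --b---      15166   22.3    3071916    9.2    3076096       9.2
guest03    -----r       1328   77.8    1028012    3.1    1028096       3.1
guest04    --b---      55433    4.7    1027880    3.1    1028096       3.1
guest05    --b---      67527    4.8    1027880    3.1    1028096       3.1
guest06    --b---      73019    6.8    1027880    3.1    1028096       3.1
guest07    --b---      61266    6.8    1027880    3.1    1028096       3.1
guest08    --b---      78880    6.2    1028012    3.1    1028096       3.1
guest09    --b---        969    7.7     516012    1.5     516096       1.5
guest10    --b---      10544    1.5    1027880    3.1    1028096       3.1
guest11    --b---      21001    5.8    1027880    3.1    1028096       3.1
"""

used, vm_sum, gap = memory_gap_kib(SAMPLE)
print(f"used={used}k  domains={vm_sum}k  unaccounted={gap / 2**20:.1f} GiB")
```

Run against the table above, this reports roughly 16 GiB unaccounted
for (33241552k used vs. 16551840k allocated to domains), matching the
figure in the report.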

Our dom0 is openSUSE 11.2 and, as you can see above, our Xen version
is 3.4.1 as packaged by the openSUSE project.  We only see this
behavior on our "development" Xen servers, where we routinely
create/clone/destroy a large number of (mostly HVM) VMs.  However, I
cannot reproduce the "leak" by performing any of those operations in
isolation: when I create a VM, the hypervisor's free-memory statistic
drops by an amount corresponding to the size of the VM, and when the
VM is destroyed, the statistic climbs back to where it was previously.

Has anyone else seen this behavior?  I have not found any way to make
the hypervisor reclaim the lost memory and, as a result, I have been
forced to reboot the server.  On systems with 20+ VMs where live
migration is not an option, this can be rather traumatic.

Any help or insight would be greatly appreciated!

--frank

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
