
[Xen-devel] [PATCH][XEND]Fix for removing devices at save/destroy domain - Take 2.


  • To: "xen-devel" <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
  • Date: Thu, 17 May 2007 14:17:00 +0200
  • Delivery-date: Thu, 17 May 2007 05:29:40 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AceYfUSBoNRe5QtsT82yyJfHNJ8lZg==
  • Thread-topic: [PATCH][XEND]Fix for removing devices at save/destroy domain - Take 2.

The function XendDomainInfo:_releaseDevices() is called during the
save/destroy phase of a domain. It made some attempt to clean up the
devices, but the cleanup was incomplete and left dangling device entries
in the xenstore. Not a big problem in normal use of Xen, but over a large
number of save/destroy cycles the leftover entries build up, the xenstore
database grows quite large, and that in turn means swap-thrashing in Dom0.
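
For illustration only (this is not part of the patch; the path below is
just the standard xenstore layout), one way to see the growth is to count
the keys under dom0's backend tree - with the leak, this number keeps
climbing across save/destroy cycles even though the set of running
domains does not change:

import subprocess

def backend_key_count():
    # xenstore-ls prints one (indented) line per key below the given path,
    # so counting lines approximates the number of backend device entries.
    out = subprocess.run(['xenstore-ls', '/local/domain/0/backend'],
                         capture_output=True, text=True)
    return len(out.stdout.splitlines())

print(backend_key_count())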

This patch makes use of the destroyDevices() function in XendDomainInfo.
That function needed some rewriting to make it work correctly - I think
it contained some old code (not sure how old, as hg annotate says it is
changeset 12071, but that, I think, is when it was split out from
XendDomain.py rather than when it was written).
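
To illustrate the kind of cleanup involved, here is a simplified sketch -
not the code in the attached patch; the real destroyDevices() goes through
xend's device controllers rather than the xenstore command-line tools, and
the device-class list is just an example:

import subprocess

DEVICE_CLASSES = ['vif', 'vbd', 'tap', 'vkbd', 'vfb', 'pci']   # example list

def destroy_devices(domid):
    # Remove both the backend and frontend xenstore subtrees for every
    # device class; without this, the backend entries dangle in dom0's
    # part of the store after the domain itself is gone.
    for devclass in DEVICE_CLASSES:
        backend = '/local/domain/0/backend/%s/%d' % (devclass, domid)
        frontend = '/local/domain/%d/device/%s' % (domid, devclass)
        for path in (backend, frontend):
            # xenstore-rm deletes the key and everything below it.
            subprocess.call(['xenstore-rm', path])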

I have tested this over a few hundred save/restore cycles [two domains
constantly saved and restored, with a short sleep to let them process
some work] combined with a loop of "xenstore-ls | wc". The output of the
latter is pretty much constant (it obviously varies a bit depending on
where in the save/restore cycle it samples). Previously, it would
increase by some 10 lines or so per save/restore cycle.
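
The save/restore loop was along these lines (a rough sketch only - the
domain names, checkpoint paths and sleep time are placeholders, not the
exact values used):

import subprocess, time

DOMAINS = ['testdom1', 'testdom2']           # placeholder guest names

while True:
    for dom in DOMAINS:
        statefile = '/tmp/%s.save' % dom     # placeholder checkpoint path
        subprocess.check_call(['xm', 'save', dom, statefile])
        subprocess.check_call(['xm', 'restore', statefile])
    time.sleep(2)                            # let the guests process some work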


Signed-off-by: Mats Petersson <mats.petersson@xxxxxxx>

--
Mats

Attachment: patch.destroy_devices_properly
Description: patch.destroy_devices_properly

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

