
[Xen-changelog] [XEND] separate concept of initial memory size and overhead memory size.



# HG changeset patch
# User kaf24@xxxxxxxxxxxxxxxxxxxx
# Node ID e7d7287ab222d9abd01f84dda21fa444798694ef
# Parent  32013c5118d255044d451768ac65518705b9af73
[XEND] separate concept of initial memory size and overhead memory size.

When a domain (whether para- or fully-virtualized) reports how much
overhead memory it requires (via getDomainMemory in image.py), all of
that memory was immediately allocated to the domain itself.  This is
incorrect for HVM domains, because qemu makes additional
increase_reservation calls later; with all of the ballooned memory
already taken, those calls fail.  The fix is to treat the requested
memory size and the overhead size as separate values: the requested
memory size is allocated to the new domain immediately, while the
overhead is left unallocated for whatever else might need it later.

Signed-off-by: Charles Coffing <ccoffing@xxxxxxxxxx>
---
 tools/python/xen/xend/XendDomainInfo.py |    2 +-
 1 files changed, 1 insertion(+), 1 deletion(-)

diff -r 32013c5118d2 -r e7d7287ab222 tools/python/xen/xend/XendDomainInfo.py
--- a/tools/python/xen/xend/XendDomainInfo.py   Fri May 19 16:01:08 2006 +0100
+++ b/tools/python/xen/xend/XendDomainInfo.py   Fri May 19 16:07:36 2006 +0100
@@ -1264,7 +1264,7 @@ class XendDomainInfo:
             m = self.image.getDomainMemory(self.info['memory'] * 1024)
             balloon.free(m)
             xc.domain_setmaxmem(self.domid, m)
-            xc.domain_memory_increase_reservation(self.domid, m, 0, 0)
+            xc.domain_memory_increase_reservation(self.domid, self.info['memory'] * 1024, 0, 0)
 
             self.createChannels()
 

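For readers following the logic of the change, below is a minimal sketch of the allocation step around this hunk. The wrapper function allocate_memory and its parameters are illustrative assumptions, not xend's actual interface; the xc and balloon calls mirror the ones visible in the diff.

    # Illustrative sketch only; 'xc' and 'balloon' stand in for the real
    # libxenctrl bindings and the xend balloon module used in
    # XendDomainInfo.py.
    def allocate_memory(xc, balloon, domid, requested_kb, overhead_kb):
        # getDomainMemory() effectively reports requested + overhead.
        total_kb = requested_kb + overhead_kb

        # Free enough ballooned memory for the domain plus its overhead,
        # and cap the domain at that total.
        balloon.free(total_kb)
        xc.domain_setmaxmem(domid, total_kb)

        # Before the fix, the full total was reserved up front:
        #   xc.domain_memory_increase_reservation(domid, total_kb, 0, 0)
        # so later increase_reservation calls (e.g. from qemu for an HVM
        # guest) found no headroom and failed.

        # After the fix, only the requested memory is reserved now; the
        # overhead stays unallocated until something such as qemu
        # actually claims it.
        xc.domain_memory_increase_reservation(domid, requested_kb, 0, 0)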


 

