
[Xen-changelog] [xen master] libxc: fix claim mode when creating HVM guest



commit 46b5f0fd1fe7a49fb993fbad8a1fa232e2253afc
Author:     Wei Liu <wei.liu2@xxxxxxxxxx>
AuthorDate: Mon Jan 27 17:53:38 2014 +0000
Commit:     Ian Campbell <ian.campbell@xxxxxxxxxx>
CommitDate: Tue Feb 4 14:40:49 2014 +0000

    libxc: fix claim mode when creating HVM guest
    
    The original code is wrong because:
    * claim mode wants to know the total number of pages needed, while the
      original code provides only the additional number of pages needed.
    * if PoD is enabled, memory will already have been allocated by the
      time we try to claim memory.
    
    So the fix is:
    * move the claim before the actual memory allocation.
    * pass the right number of pages to the hypervisor.
    
    The "right number of pages" should be the number of pages of target
    memory minus VGA_HOLE_SIZE, regardless of whether PoD is enabled.
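    
    In short, the new ordering in setup_guest() boils down to the sketch
    below. The claim_then_allocate() wrapper and its parameters are
    illustrative only; the libxc calls and VGA_HOLE_SIZE are taken from
    the patch itself.
    
        #include <stdint.h>
        #include <xenctrl.h>
    
        #define VGA_HOLE_SIZE (0x20)
    
        /* Illustrative wrapper: claim first, then allocate. */
        static int claim_then_allocate(xc_interface *xch, uint32_t dom,
                                       unsigned long target_pages,
                                       int claim_enabled, int pod_mode)
        {
            int rc = 0;
    
            /* Claim the TOTAL number of pages (minus the VGA hole) before
             * any allocation happens; claiming afterwards is pointless. */
            if ( claim_enabled )
            {
                rc = xc_domain_claim_pages(xch, dom,
                                           target_pages - VGA_HOLE_SIZE);
                if ( rc != 0 )
                    return rc;
            }
    
            /* Only now let PoD (or the normal populate loop) allocate the
             * guest's memory. */
            if ( pod_mode )
                rc = xc_domain_set_pod_target(xch, dom,
                                              target_pages - VGA_HOLE_SIZE,
                                              NULL, NULL, NULL);
            return rc;
        }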
    
    This fixes bug #32.
    
    Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
    Acked-by: Ian Campbell <ian.campbell@xxxxxxxxxx>
    Tested-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
    Cc: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
---
 tools/libxc/xc_hvm_build_x86.c |   36 +++++++++++++++++++++++-------------
 1 files changed, 23 insertions(+), 13 deletions(-)

diff --git a/tools/libxc/xc_hvm_build_x86.c b/tools/libxc/xc_hvm_build_x86.c
index 77bd365..dd3b522 100644
--- a/tools/libxc/xc_hvm_build_x86.c
+++ b/tools/libxc/xc_hvm_build_x86.c
@@ -49,6 +49,8 @@
 #define NR_SPECIAL_PAGES     8
 #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
 
+#define VGA_HOLE_SIZE (0x20)
+
 static int modules_init(struct xc_hvm_build_args *args,
                         uint64_t vend, struct elf_binary *elf,
                         uint64_t *mstart_out, uint64_t *mend_out)
@@ -302,14 +304,31 @@ static int setup_guest(xc_interface *xch,
     for ( i = mmio_start >> PAGE_SHIFT; i < nr_pages; i++ )
         page_array[i] += mmio_size >> PAGE_SHIFT;
 
+    /*
+     * Try to claim pages for early warning of insufficient memory available.
+     * This should go before xc_domain_set_pod_target, because that function
+     * actually allocates memory for the guest. Claiming after memory has been
+     * allocated is pointless.
+     */
+    if ( claim_enabled ) {
+        rc = xc_domain_claim_pages(xch, dom, target_pages - VGA_HOLE_SIZE);
+        if ( rc != 0 )
+        {
+            PERROR("Could not allocate memory for HVM guest as we cannot claim 
memory!");
+            goto error_out;
+        }
+    }
+
     if ( pod_mode )
     {
         /*
-         * Subtract 0x20 from target_pages for the VGA "hole".  Xen will
-         * adjust the PoD cache size so that domain tot_pages will be
-         * target_pages - 0x20 after this call.
+         * Subtract VGA_HOLE_SIZE from target_pages for the VGA
+         * "hole".  Xen will adjust the PoD cache size so that domain
+         * tot_pages will be target_pages - VGA_HOLE_SIZE after
+         * this call.
          */
-        rc = xc_domain_set_pod_target(xch, dom, target_pages - 0x20,
+        rc = xc_domain_set_pod_target(xch, dom,
+                                      target_pages - VGA_HOLE_SIZE,
                                       NULL, NULL, NULL);
         if ( rc != 0 )
         {
@@ -333,15 +352,6 @@ static int setup_guest(xc_interface *xch,
     cur_pages = 0xc0;
     stat_normal_pages = 0xc0;
 
-    /* try to claim pages for early warning of insufficient memory available */
-    if ( claim_enabled ) {
-        rc = xc_domain_claim_pages(xch, dom, nr_pages - cur_pages);
-        if ( rc != 0 )
-        {
-            PERROR("Could not allocate memory for HVM guest as we cannot claim 
memory!");
-            goto error_out;
-        }
-    }
     while ( (rc == 0) && (nr_pages > cur_pages) )
     {
         /* Clip count to maximum 1GB extent. */
--
generated by git-patchbot for /home/xen/git/xen.git#master
