
Re: [Xen-devel] [PATCH] libxc: Defer initialization of start_page for HVM guests



On 12/22/2015 04:42 AM, Roger Pau Monné wrote:
On 12/22/15 at 0:45, Boris Ostrovsky wrote:
With commit 8c45adec18e0 ("libxc: create unmapped initrd in domain
builder if supported"), the location of the ramdisk may not be available
to HVMlite guests by the time alloc_magic_pages_hvm() is invoked, if the
guest supports an unmapped initrd.

Since the only operation in that routine that is actually related to
allocating magic pages is the allocation of the HVMlite start info, we
can move everything else to a later point, such as
xc_dom_arch.start_info().

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
---
I am not sure xc_dom_arch.start_info() is the right place either, since we
are still doing things there that have nothing to do with start_info.
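For reference, the two hooks in question are members of struct xc_dom_arch
(xc_dom.h). A trimmed sketch, with the remaining members omitted and the
comments only reflecting my reading of the call order:

    struct xc_dom_arch {
        /* ... */
        /* Runs early in the build, before the kernel/ramdisk layout is
         * final, so the ramdisk location may not be known yet. */
        int (*alloc_magic_pages)(struct xc_dom_image *dom);
        /* Runs later, once the segments have been placed. */
        int (*start_info)(struct xc_dom_image *dom);
        /* ... */
    };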

  tools/libxc/xc_dom_x86.c |   45 ++++++++++++++++++++++++++++++---------------
  1 files changed, 30 insertions(+), 15 deletions(-)

diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index 3960875..f64079e 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -585,6 +585,32 @@ static void build_hvm_info(void *hvm_info_page, struct xc_dom_image *dom)
static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
  {
+        struct xc_dom_seg seg;
+        size_t start_info_size = sizeof(struct hvm_start_info);
+
+        if ( dom->device_model )
+            return 0;
+
+        if ( dom->cmdline )
+            start_info_size += ROUNDUP(strlen(dom->cmdline) + 1, 8);
+        if ( dom->ramdisk_blob )
+            /* Limited to one module. */
+            start_info_size += sizeof( struct hvm_modlist_entry);
                                          ^ extra space.
+
+        if ( xc_dom_alloc_segment(dom, &seg, "HVMlite start info", 0,
+                                  start_info_size) )
+        {
+            DOMPRINTF("Unable to reserve memory for the start info");
+            return -1;
+        }
+
+        dom->start_info_pfn = seg.pfn;
+
+        return 0;
+}
+
+static int start_info_hvm(struct xc_dom_image *dom)
+{
      unsigned long i;
      void *hvm_info_page;
      uint32_t *ident_pt, domid = dom->guest_domid;
@@ -636,7 +662,6 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
     if ( !dom->device_model )
      {
-        struct xc_dom_seg seg;
          struct hvm_start_info *start_info;
          char *cmdline;
          struct hvm_modlist_entry *modlist;
@@ -653,17 +678,9 @@ static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
          if ( dom->ramdisk_blob )
              start_info_size += sizeof(*modlist); /* Limited to one module. */
-        rc = xc_dom_alloc_segment(dom, &seg, "HVMlite start info", 0,
-                                  start_info_size);
-        if ( rc != 0 )
-        {
-            DOMPRINTF("Unable to reserve memory for the start info");
-            goto out;
-        }
-
         start_page = xc_map_foreign_range(xch, domid, start_info_size,
IMHO, I would stash start_info_size somewhere inside xc_dom_image in
order to avoid calculating it twice.

Yes, I should do this.
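
Something along these lines, with the new field name just a placeholder:

    /* xc_dom.h: remember the size computed in alloc_magic_pages_hvm() */
    struct xc_dom_image {
        /* ... */
        xen_pfn_t start_info_pfn;
        size_t start_info_size;    /* hypothetical new field */
        /* ... */
    };

    /* alloc_magic_pages_hvm(): record the size next to the pfn */
    dom->start_info_size = start_info_size;

    /* start_info_hvm(): reuse it instead of recomputing */
    start_page = xc_map_foreign_range(xch, domid, dom->start_info_size,
                                      PROT_READ | PROT_WRITE,
                                      dom->start_info_pfn);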


Also, why is everything done inside of alloc_magic_pages_hvm moved to
start_info_hvm? AFAICT we should only need to move the code that fills
the hvm_start_info struct, but the rest of the code already present in
alloc_magic_pages_hvm could be left as-is.

That's what I was referring to in the commit message (and in the comment below it): most, if not all, of what alloc_magic_pages_hvm() currently does really has nothing to do with magic page allocation, e.g. hvm_params setting, special pages initialization, etc. So I figured I could move all of it to a later point. But, as I said in the comment, I am not convinced that start_info() is the right place either.
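
To make the options concrete, a rough outline (function bodies elided):

    /* What this patch does: alloc_magic_pages_hvm() only reserves the
     * start info segment and records its pfn; everything else runs
     * from the later hook. */
    static int alloc_magic_pages_hvm(struct xc_dom_image *dom)
    {
        /* xc_dom_alloc_segment("HVMlite start info"), stash seg.pfn */
        return 0;
    }

    static int start_info_hvm(struct xc_dom_image *dom)
    {
        /* hvm_info table, special pages, hvm_params, ident_pt, and
         * filling struct hvm_start_info from the final layout */
        return 0;
    }

The alternative you suggest would keep all of the above in
alloc_magic_pages_hvm() and defer only the hvm_start_info filling, which
is the part that actually needs the final ramdisk location.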


-boris

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

