
Re: [PATCH v3 4/4] xen/memory, tools: Avoid hardcoding GUEST_MAGIC_BASE in init-dom0less


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Henry Wang <xin.wang2@xxxxxxx>
  • Date: Mon, 8 Apr 2024 16:12:13 +0800
  • Cc: Anthony PERARD <anthony.perard@xxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, "Julien Grall" <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "Alec Kwapis" <alec.kwapis@xxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 08 Apr 2024 09:28:33 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Jan,

On 4/8/2024 3:03 PM, Jan Beulich wrote:
On 08.04.2024 08:59, Henry Wang wrote:
Hi Jan,

On 4/8/2024 2:22 PM, Jan Beulich wrote:
On 08.04.2024 05:19, Henry Wang wrote:
On 4/4/2024 5:38 PM, Jan Beulich wrote:
On 03.04.2024 10:16, Henry Wang wrote:
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -41,6 +41,11 @@
    #define XENMEMF_exact_node(n) (XENMEMF_node(n) | XENMEMF_exact_node_request)
    /* Flag to indicate the node specified is virtual node */
    #define XENMEMF_vnode  (1<<18)
+/*
+ * Flag to force populate physmap to use pages from domheap instead of 1:1
+ * or static allocation.
+ */
+#define XENMEMF_force_heap_alloc  (1<<19)
As before, a separate new sub-op would look to me as being the cleaner
approach, avoiding the need to consume a bit position for something not
even going to be used on all architectures.
As discussed in v2, I doubt that introducing a new sub-op would be a
good idea, since the helpers it requires would mostly duplicate
populate_physmap() and the toolstack helpers.
I'm curious what amount of duplication you still see left. By suitably
adding a new parameter, there should be very little left.
The duplication I see so far is basically an exact copy of
xc_domain_populate_physmap(), say
xc_domain_populate_physmap_heap_alloc(). In init-dom0less.c, we can
replace the original xc_domain_populate_physmap_exact() call with a call to
the newly added xc_domain_populate_physmap_heap_alloc(), which invokes the new
sub-op; on the hypervisor side we then set the alias MEMF flag and
share populate_physmap().
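
(For illustration, a rough, untested sketch of what I mean on the hypervisor
side is below. It deliberately does not mirror the exact structure of
do_memory_op(), and XENMEM_populate_physmap_heapalloc /
MEMF_force_heap_alloc are only placeholder names:)

    /*
     * Illustrative sketch only: route the new sub-op through the existing
     * populate_physmap() path by translating it into an internal-only
     * memflag.  The flag is never accepted as a XENMEMF_* bit from the
     * guest interface.
     */
    case XENMEM_populate_physmap_heapalloc:
        force_heap_alloc = true;
        /* fall through */
    case XENMEM_populate_physmap:
        /* ... existing parsing of the reservation into args ... */
        if ( force_heap_alloc )
            args.memflags |= MEMF_force_heap_alloc;
        populate_physmap(&args);
        break;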

Adding a new parameter to xc_domain_populate_physmap(), or maybe even
xc_domain_populate_physmap_exact(), is also a good idea (thanks). I was
just worried that there are already quite a few callers of these two
functions in the existing code: 14 for
xc_domain_populate_physmap_exact() and 8 for
xc_domain_populate_physmap(). Adding a new parameter would require updating
all of them as well as the function declarations. If you really insist on
this way, I can do it, sure.
You don't need to change all the callers. You can morph
xc_domain_populate_physmap() into an internal helper, which a new trivial
wrapper named xc_domain_populate_physmap() would then call, alongside
the new trivial wrapper you want to introduce.

Thanks for the good suggestion. Would the key diff below make sense to you (naming can be discussed further)? Also, from checking the code, if we go this way maybe we can even simplify xc_domain_decrease_reservation() and xc_domain_increase_reservation()? (Although there are some hardcoded hypercall names in the error messages and some small differences in the memflags.)

diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index 8363657dae..5547841e6a 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1124,12 +1124,13 @@ int xc_domain_claim_pages(xc_interface *xch,
     return err;
 }

-int xc_domain_populate_physmap(xc_interface *xch,
-                               uint32_t domid,
-                               unsigned long nr_extents,
-                               unsigned int extent_order,
-                               unsigned int mem_flags,
-                               xen_pfn_t *extent_start)
+static int xc_populate_physmap_cmd(xc_interface *xch,
+                                   unsigned int cmd,
+                                   uint32_t domid,
+                                   unsigned long nr_extents,
+                                   unsigned int extent_order,
+                                   unsigned int mem_flags,
+                                   xen_pfn_t *extent_start)
 {
     int err;
     DECLARE_HYPERCALL_BOUNCE(extent_start, nr_extents * sizeof(*extent_start), XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
@@ -1147,12 +1148,50 @@ int xc_domain_populate_physmap(xc_interface *xch,
     }
     set_xen_guest_handle(reservation.extent_start, extent_start);

-    err = xc_memory_op(xch, XENMEM_populate_physmap, &reservation, sizeof(reservation));
+    err = xc_memory_op(xch, cmd, &reservation, sizeof(reservation));

     xc_hypercall_bounce_post(xch, extent_start);
     return err;
 }

+int xc_domain_populate_physmap(xc_interface *xch,
+                               uint32_t domid,
+                               unsigned long nr_extents,
+                               unsigned int extent_order,
+                               unsigned int mem_flags,
+                               xen_pfn_t *extent_start)
+{
+    return xc_populate_physmap_cmd(xch, XENMEM_populate_physmap, domid,
+                                   nr_extents, extent_order, mem_flags,
+                                   extent_start);
+}
+
+int xc_domain_populate_physmap_heap_exact(xc_interface *xch,
+                                          uint32_t domid,
+                                          unsigned long nr_extents,
+                                          unsigned int extent_order,
+                                          unsigned int mem_flags,
+                                          xen_pfn_t *extent_start)
+{
+    int err;
+
+    err = xc_populate_physmap_cmd(xch, XENMEM_populate_physmap_heapalloc,
+                                  domid, nr_extents, extent_order, mem_flags,
+                                  extent_start);
+    if ( err == nr_extents )
+        return 0;
+
+    if ( err >= 0 )
+    {
+        DPRINTF("Failed allocation for dom %d: %ld extents of order %d\n",
+                domid, nr_extents, extent_order);
+        errno = EBUSY;
+        err = -1;
+    }
+
+    return err;
+}
+
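
For completeness, init-dom0less.c would then just switch from
xc_domain_populate_physmap_exact() to the new wrapper, roughly as in the
(illustrative, untested) snippet below; magic_base_pfn and nr_magic_pages are
placeholders for whatever the rest of the series computes once
GUEST_MAGIC_BASE is no longer hardcoded:

    /*
     * Illustrative sketch only: allocate the magic pages from the domheap
     * via the new wrapper.  magic_base_pfn and nr_magic_pages are
     * placeholders, not the final names.
     */
    for ( unsigned int i = 0; i < nr_magic_pages; i++ )
    {
        xen_pfn_t pfn = magic_base_pfn + i;
        int rc = xc_domain_populate_physmap_heap_exact(xch, domid, 1, 0, 0,
                                                       &pfn);

        if ( rc )
            return rc;
    }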


Kind regards,
Henry

Jan




 

