
Re: [PATCH v2 1/4] xen/arm: Alloc hypervisor reserved pages as magic pages for Dom0less DomUs

Hi Henry,

On 11/05/2024 01:56, Henry Wang wrote:
There are use cases (for example using the PV drivers) in a Dom0less
setup that require Dom0less DomUs to start immediately with Dom0, but
to initialize XenStore later, after Dom0 has booted successfully and
called the init-dom0less application.

An error message can be seen from the init-dom0less application on
1:1 direct-mapped domains:
```
Allocating magic pages
memory.c:238:d0v0 mfn 0x39000 doesn't belong to d1
Error on alloc magic pages
```

The "magic page" is a terminology used in the toolstack as reserved
pages for the VM to have access to virtual platform capabilities.
Currently the magic pages for Dom0less DomUs are populated by the
init-dom0less app through populate_physmap(), and populate_physmap()
automatically assumes gfn == mfn for 1:1 direct mapped domains. This
cannot be true for the magic pages that are allocated later from the
init-dom0less application executed in Dom0. For domains using statically
allocated memory but not 1:1 direct-mapped, a similar error, "failed to
retrieve a reserved page", can be seen, as the reserved memory list is
empty at that time.
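
For reference, the failing check is the gfn == mfn assumption in
populate_physmap() (xen/common/memory.c); a simplified paraphrase of
the direct-mapped path, not the verbatim source:
```
if ( is_domain_direct_mapped(d) )
{
    /* gfn == mfn is assumed, so the backing page must already
     * belong to the domain... */
    mfn_t mfn = _mfn(gpfn);

    if ( !mfn_valid(mfn) || !get_page(mfn_to_page(mfn), d) )
    {
        /* ...otherwise the request is rejected, producing the
         * "doesn't belong to" message quoted above. */
        gdprintk(XENLOG_INFO, "mfn %#"PRI_mfn" doesn't belong to d%d\n",
                 mfn_x(mfn), d->domain_id);
        goto out;
    }
}
```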

To solve the above issue, this commit allocates hypervisor reserved pages
(currently used as the magic pages) for Arm Dom0less DomUs at the
domain construction time. The base address/PFN of the region will be
noted and communicated to the init-dom0less application in Dom0.

Reported-by: Alec Kwapis <alec.kwapis@xxxxxxxxxxxxx>
Suggested-by: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
Signed-off-by: Henry Wang <xin.wang2@xxxxxxx>
---
v2:
- Reword the commit msg to explain what is "magic page" and use generic
   terminology "hypervisor reserved pages" in commit msg. (Daniel)
- Also move the offset definition of magic pages. (Michal)
- Extract the magic page allocation logic to a function. (Michal)
---
  tools/libs/guest/xg_dom_arm.c |  6 ------
  xen/arch/arm/dom0less-build.c | 32 ++++++++++++++++++++++++++++++++
  xen/include/public/arch-arm.h |  6 ++++++
  3 files changed, 38 insertions(+), 6 deletions(-)

diff --git a/tools/libs/guest/xg_dom_arm.c b/tools/libs/guest/xg_dom_arm.c
index 2fd8ee7ad4..8c579d7576 100644
--- a/tools/libs/guest/xg_dom_arm.c
+++ b/tools/libs/guest/xg_dom_arm.c
@@ -25,12 +25,6 @@
#include "xg_private.h" -#define NR_MAGIC_PAGES 4
-#define CONSOLE_PFN_OFFSET 0
-#define XENSTORE_PFN_OFFSET 1
-#define MEMACCESS_PFN_OFFSET 2
-#define VUART_PFN_OFFSET 3
-
  #define LPAE_SHIFT 9

  #define PFN_4K_SHIFT (0)
diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
index 74f053c242..4b96ddd9ce 100644
--- a/xen/arch/arm/dom0less-build.c
+++ b/xen/arch/arm/dom0less-build.c
@@ -739,6 +739,34 @@ static int __init alloc_xenstore_evtchn(struct domain *d)
      return 0;
  }

+static int __init alloc_magic_pages(struct domain *d)
+{
+    struct page_info *magic_pg;
+    mfn_t mfn;
+    gfn_t gfn;
+    int rc;
+
+    d->max_pages += NR_MAGIC_PAGES;
+    magic_pg = alloc_domheap_pages(d, get_order_from_pages(NR_MAGIC_PAGES), 0);
+    if ( magic_pg == NULL )
+        return -ENOMEM;
+
+    mfn = page_to_mfn(magic_pg);
+    if ( !is_domain_direct_mapped(d) )
+        gfn = gaddr_to_gfn(GUEST_MAGIC_BASE);
+    else
+        gfn = gaddr_to_gfn(mfn_to_maddr(mfn));

Summarizing the discussion we had on Matrix. Regions like the extended regions and shared memory may not be direct mapped. So unfortunately, I think it is possible that the GFN could clash with one of those.

At least in the shared memory case, the user can provide the address. But as you use the domheap allocator, the address returned could easily change if you tweak your setup.

I am not entirely sure what the best solution is. We could ask the user to provide the information for the reserved region. But it feels like we are exposing a bit too much to the user.

So possibly we would want to use the same approach as for the extended regions: once we have processed all the mappings, find some space for the hypervisor regions.
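
To illustrate the shape (hypothetical helper and names, loosely modelled
on the rangeset-based hole finding that find_unallocated_memory() does
for the extended regions; a sketch, not a concrete implementation):
```
/*
 * Hypothetical sketch: record every GFN range already mapped for the
 * domain (RAM banks, shared memory, extended regions, ...) in a
 * rangeset, then pick a hole for the hypervisor reserved pages once
 * all the mappings have been processed.
 */
static int __init find_reserved_region_gfn(struct domain *d,
                                           struct rangeset *occupied,
                                           unsigned long nr_pages,
                                           gfn_t *gfn)
{
    paddr_t start;

    /* Scan the guest address space for an unused, 2MB-aligned hole. */
    for ( start = GUEST_RAM0_BASE; start < GUEST_RAM1_BASE; start += SZ_2M )
    {
        if ( !rangeset_overlaps_range(occupied, PFN_DOWN(start),
                                      PFN_DOWN(start) + nr_pages - 1) )
        {
            *gfn = gaddr_to_gfn(start);
            return 0;
        }
    }

    return -ENOSPC;
}
```
The GFN chosen this way would then be the one communicated to the
init-dom0less application.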

Any other suggestions?

Cheers,

--
Julien Grall
