[RFC PATCH v6 17/43] arm/p2m: Move hostp2m init/teardown to individual functions



From: Rose Spangler <Rose.Spangler@xxxxxxxxxxxxxx>

This commit pulls generic init/teardown functionality out of "p2m_init"
and "p2m_teardown" into the new "p2m_init_one", "p2m_teardown_one",
"p2m_free_one", and "p2m_flush_table" functions. This allows our future
implementation to reuse the existing code for the initialization/teardown
of altp2m views.

On failure, p2m_init_one returns a null pointer rather than an error
code, so it cannot propagate the exact error from p2m_initialise
(specifically, ENOMEM if allocation fails and EBUSY if the VMID pool is
exhausted). However, returning a p2m_domain pointer simplifies usage for
callers of p2m_init_one, as they receive an allocated and initialized
p2m_domain without needing to handle allocation separately. Therefore,
the main p2m_init function will now return ENOMEM instead of EBUSY when
the VMID pool is exhausted.

This is commit 6/12 of the altp2m_init/altp2m_teardown routines phase.

Signed-off-by: Rose Spangler <Rose.Spangler@xxxxxxxxxxxxxx>
Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
---
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
---
v2: Added the function p2m_flush_table to the previous version.

v3: Removed struct vttbr.

    Moved define INVALID_VTTBR to p2m.h.

    Exported function prototypes of "p2m_flush_table", "p2m_init_one",
    and "p2m_teardown_one" in p2m.h.

    Extended the function "p2m_flush_table" by additionally resetting
    the fields lowest_mapped_gfn and max_mapped_gfn.

    Added a "p2m_flush_tlb" call in "p2m_flush_table". On altp2m reset
    in the function "altp2m_reset", it is important to flush the TLBs
    after clearing the root table pages and before clearing the
    intermediate altp2m page tables to prevent illegal accesses through
    stale TLB entries on currently active VCPUs.

    Added a check in "p2m_flush_table" for whether p2m->root is NULL.

    Renamed the function "p2m_free_one" to "p2m_teardown_one".

    Removed resetting p2m->vttbr in "p2m_teardown_one", as the p2m will
    be destroyed afterwards.

    Moved call to "p2m_alloc_table" back to "p2m_init_one".

    Moved the introduction of the type p2m_class_t out of this patch.

    Moved the backpointer to the struct domain out of the struct
    p2m_domain.

v4: Replaced the former use of clear_and_clean_page in p2m_flush_table
    by a routine that invalidates every p2m entry atomically. This
    avoids inconsistencies on CPUs that continue to use the views that
    are to be flushed (e.g., see altp2m_reset).

    Removed unnecessary initializations in the functions "p2m_init_one"
    and "p2m_teardown_one".

    Removed the define INVALID_VTTBR as it is not used any more.

    Cosmetic fixes.

v6: Reworked to accommodate the hostp2m being allocated separately from
    arch_domain.

    Split p2m_teardown_one into p2m_teardown_one and p2m_free_one, with
    both having the same semantics as their x86 counterparts.

    The previous version of this patch also added code to p2m_flush_table.
    This has been split out into a separate commit to minimize the number
    of actual changes in this commit.

    Updated to account for the introduction of p2m_final_teardown.  The
    code which previously was used to reclaim resources from the hostp2m
    during p2m_final_teardown was extracted into p2m_free_one. Now,
    p2m_final_teardown will call p2m_free_one on the hostp2m instead. The
    check for whether the p2m was actually initialized was moved into
    p2m_free_one. This means there is a slight behavior change:
    p2m_teardown_allocation will always be called, even if the p2m_domain
    was never initialized. I'm not sure whether this is really a big deal
    (it does require the lock to be obtained, but that shouldn't be an
    issue at final teardown?), but if it is, I can duplicate the check
    from p2m_free_one at the top of p2m_final_teardown, which would
    replicate the previous behavior.

    For the sake of making the p2m_init_one function simple to use (and
    match the x86 function prototype), it returns a null pointer on error
    rather than an error code. While on x86 an error from p2m_init_one is
    always due to an ENOMEM return code (from xzalloc, zalloc_cpumask_var),
    on ARM p2m_initialise (called by p2m_init_one) can actually return
    EBUSY if the VMID pool is exhausted. Therefore, in this error case the
    null pointer return value of p2m_init_one obscures the true error code
    (EBUSY). Callers of p2m_init_one always return ENOMEM when p2m_init_one
    returns a null pointer, so this error will be propagated up as ENOMEM
    rather than EBUSY. The alternative would be to have callers pass a
    struct p2m_domain ** out-parameter to p2m_init_one while still
    returning an integer error code. In order to preserve a common
    altp2m_init routine, this change would also have to be made on x86
    (where there would be no real benefit, as the return code there is
    only ever ENOMEM). Therefore, obscuring the error in this case seems
    like an acceptable tradeoff to me, but if other folks think we should
    handle this differently, then I can revisit it.

    The p2m_teardown_allocation call and p2m_init_one return code changes
    mentioned above should be the only actual behavior changes in this
    patch, otherwise it should be just code movement.
---
 xen/arch/arm/include/asm/p2m.h | 12 ++++++
 xen/arch/arm/mmu/p2m.c         | 77 +++++++++++++++++++++++-----------
 2 files changed, 65 insertions(+), 24 deletions(-)

diff --git a/xen/arch/arm/include/asm/p2m.h b/xen/arch/arm/include/asm/p2m.h
index 23df91ea13e9..5c6dfe4a9789 100644
--- a/xen/arch/arm/include/asm/p2m.h
+++ b/xen/arch/arm/include/asm/p2m.h
@@ -216,6 +216,18 @@ int p2m_init(struct domain *d);
 int p2m_teardown(struct domain *d);
 void p2m_final_teardown(struct domain *d);
 
+/* Flushes the page table held by the p2m. */
+int p2m_flush_table(struct p2m_domain *p2m);
+
+/* Initialize the p2m structure. */
+struct p2m_domain *p2m_init_one(struct domain *d);
+
+/* Release resources held by the p2m structure. */
+int p2m_teardown_one(struct p2m_domain *p2m);
+
+/* Free the p2m structure allocation. */
+void p2m_free_one(struct p2m_domain *p2m);
+
 /*
  * Remove mapping refcount on each mapping page in the p2m
  *
diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c
index 1009f10e5db4..1d598c66450b 100644
--- a/xen/arch/arm/mmu/p2m.c
+++ b/xen/arch/arm/mmu/p2m.c
@@ -1444,14 +1444,10 @@ static int p2m_alloc_table(struct domain *d)
     return 0;
 }
 
-int p2m_teardown(struct domain *d)
+int p2m_flush_table(struct p2m_domain *p2m)
 {
-    struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long count = 0;
     struct page_info *pg;
-    int rc = 0;
-
-    p2m_write_lock(p2m);
 
     while ( (pg = page_list_remove_head(&p2m->pages)) )
     {
@@ -1460,23 +1456,34 @@ int p2m_teardown(struct domain *d)
         /* Arbitrarily preempt every 512 iterations */
         if ( !(count % 512) && hypercall_preempt_check() )
         {
-            rc = -ERESTART;
-            break;
+            return -ERESTART;
         }
     }
 
+    return 0;
+}
+
+int p2m_teardown_one(struct p2m_domain *p2m)
+{
+    int rc;
+
+    p2m_write_lock(p2m);
+    rc = p2m_flush_table(p2m);
     p2m_write_unlock(p2m);
 
     return rc;
 }
 
-void p2m_final_teardown(struct domain *d)
+int p2m_teardown(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
-    /* p2m not actually initialized */
-    if ( !p2m->domain )
-        goto free_p2m;
+    return p2m_teardown_one(p2m);
+}
+
+void p2m_final_teardown(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
 
     /*
      * No need to call relinquish_p2m_mapping() here because
@@ -1484,18 +1491,27 @@ void p2m_final_teardown(struct domain *d)
      * where relinquish_p2m_mapping() has been called.
      */
 
-    ASSERT(page_list_empty(&p2m->pages));
-
     while ( p2m_teardown_allocation(d) == -ERESTART )
         continue; /* No preemption support here */
     ASSERT(page_list_empty(&d->arch.paging.p2m_freelist));
 
+    p2m_free_one(p2m);
+}
+
+void p2m_free_one(struct p2m_domain *p2m)
+{
+    /* p2m not actually initialized */
+    if ( !p2m->domain )
+        goto free_p2m;
+
+    ASSERT(page_list_empty(&p2m->pages));
+
     if ( p2m->root )
         free_domheap_pages(p2m->root, P2M_ROOT_ORDER);
 
     p2m->root = NULL;
 
-    p2m_free_vmid(d);
+    p2m_free_vmid(p2m->domain);
 
     radix_tree_destroy(&p2m->mem_access_settings, NULL);
 
@@ -1511,9 +1527,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     unsigned int cpu;
 
     rwlock_init(&p2m->lock);
-    spin_lock_init(&d->arch.paging.lock);
     INIT_PAGE_LIST_HEAD(&p2m->pages);
-    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
     p2m->vmid = INVALID_VMID;
     p2m->max_mapped_gfn = _gfn(0);
@@ -1559,22 +1573,37 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     return 0;
 }
 
-int p2m_init(struct domain *d)
+struct p2m_domain *p2m_init_one(struct domain *d)
 {
     struct p2m_domain *p2m = xzalloc(struct p2m_domain);
-    int rc;
+
+    if ( !p2m )
+        return NULL;
+
+    if ( !p2m_initialise(d, p2m) )
+        return p2m;
+
+    xfree(p2m);
+    return NULL;
+}
+
+static int p2m_init_hostp2m(struct domain *d)
+{
+    struct p2m_domain *p2m = p2m_init_one(d);
 
     if ( !p2m )
         return -ENOMEM;
 
-    rc = p2m_initialise(d, p2m);
+    d->arch.p2m = p2m;
+    return 0;
+}
 
-    if ( !rc )
-        d->arch.p2m = p2m;
-    else
-        xfree(p2m);
+int p2m_init(struct domain *d)
+{
+    spin_lock_init(&d->arch.paging.lock);
+    INIT_PAGE_LIST_HEAD(&d->arch.paging.p2m_freelist);
 
-    return rc;
+    return p2m_init_hostp2m(d);
 }
 
 /*
-- 
2.34.1




 

