[Xen-devel] [PATCH v3 33/38] arm/p2m: Add altp2m paging mechanism



This commit adds the function "altp2m_lazy_copy", which implements the
altp2m paging mechanism. On 2nd stage translation faults caused by
instruction or data accesses, "altp2m_lazy_copy" lazily copies the
hostp2m's mapping of the faulting address into the currently active
altp2m view.
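
In outline, the mechanism looks as follows (a condensed sketch of the
function added below; error handling, such as crashing the domain if
updating the altp2m entry fails, is elided here and shown in full in the
diff):

    bool_t altp2m_lazy_copy(struct vcpu *v, gfn_t gfn, struct npfec npfec)
    {
        struct p2m_domain *hp2m = p2m_get_hostp2m(v->domain);
        struct p2m_domain *ap2m = altp2m_get_altp2m(v);
        p2m_type_t p2mt;
        p2m_access_t p2ma;
        unsigned int page_order;
        mfn_t mfn;

        if ( ap2m == NULL )
            return false;

        /*
         * If the altp2m view already maps the faulting gfn (e.g. another
         * vcpu faulted on it first), simply let the guest retry.
         */
        if ( !mfn_eq(p2m_lookup_attr(ap2m, gfn, NULL, NULL, NULL), INVALID_MFN) )
            return true;

        /*
         * Otherwise, copy the hostp2m mapping into the active altp2m view,
         * holding the hostp2m read lock so that the mapping cannot change
         * before it has been inserted into the altp2m view.
         */
        p2m_read_lock(hp2m);
        mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &p2ma, &page_order);
        if ( !mfn_eq(mfn, INVALID_MFN) )
            modify_altp2m_entry(ap2m, gfn, mfn, p2mt, p2ma, page_order);
        p2m_read_unlock(hp2m);

        return true;
    }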

Signed-off-by: Sergej Proskurin <proskurin@xxxxxxxxxxxxx>
---
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>
Cc: Julien Grall <julien.grall@xxxxxxx>
---
v3: Cosmetic fixes.

    Locked the hostp2m in the function "altp2m_lazy_copy" to avoid a
    mapping being changed in the hostp2m before it has been inserted into
    the altp2m view.

    Removed unnecessary calls to "p2m_mem_access_check" in the functions
    "do_trap_instr_abort_guest" and "do_trap_data_abort_guest" after a
    translation fault has been handled by the function
    "altp2m_lazy_copy".

    Adapted "altp2m_lazy_copy" to return the value "true" if the
    encountered translation fault encounters a valid entry inside of the
    currently active altp2m view. If multiple vcpu's are using the same
    altp2m, it is likely that both generate a translation fault, whereas
    the first one will be already handled by "altp2m_lazy_copy". With
    this change the 2nd vcpu will retry accessing the faulting address.

    Changed the order of altp2m checking and MMIO emulation within the
    function "do_trap_data_abort_guest". Now, altp2m is checked and
    handled only if the MMIO access does not have to be emulated.

    Changed the function prototype of "altp2m_lazy_copy": removed the
    unnecessary struct p2m_domain* argument as well as the unnecessary
    argument gva, changed the type of the parameter gpa from paddr_t to
    gfn_t, and renamed it to gfn.

    Moved the altp2m handling mechanism into a separate function
    "try_handle_altp2m".

    Moved the functions "p2m_altp2m_check" and
    "altp2m_switch_vcpu_altp2m_by_id" out of this patch.

    Moved the plain code movement into a separate patch.
---
 xen/arch/arm/altp2m.c        | 62 ++++++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c         | 35 +++++++++++++++++++++++++
 xen/include/asm-arm/altp2m.h |  5 ++++
 3 files changed, 102 insertions(+)

diff --git a/xen/arch/arm/altp2m.c b/xen/arch/arm/altp2m.c
index 11272e9..2009bad 100644
--- a/xen/arch/arm/altp2m.c
+++ b/xen/arch/arm/altp2m.c
@@ -165,6 +165,68 @@ out:
     return rc;
 }
 
+/*
+ * The function altp2m_lazy_copy returns "false" on error.  The return value
+ * "true" signals that either the mapping has been successfully lazy-copied
+ * from the hostp2m to the currently active altp2m view, or that the altp2m
+ * view already holds a valid mapping. The latter is the case if multiple
+ * vcpus using the same altp2m view generate a translation fault for the
+ * same mapping and the first fault has already been handled.
+ */
+bool_t altp2m_lazy_copy(struct vcpu *v,
+                        gfn_t gfn,
+                        struct npfec npfec)
+{
+    struct domain *d = v->domain;
+    struct p2m_domain *hp2m = p2m_get_hostp2m(d), *ap2m = NULL;
+    p2m_type_t p2mt;
+    p2m_access_t p2ma;
+    mfn_t mfn;
+    unsigned int page_order;
+    int rc;
+
+    ap2m = altp2m_get_altp2m(v);
+    if ( ap2m == NULL )
+        return false;
+
+    /* Check if entry is part of the altp2m view. */
+    mfn = p2m_lookup_attr(ap2m, gfn, NULL, NULL, NULL);
+    if ( !mfn_eq(mfn, INVALID_MFN) )
+        /*
+         * If multiple vcpus are using the same altp2m, it is likely that
+         * both generate a translation fault: the first one is handled
+         * normally, while the second encounters a valid mapping that has
+         * already been added as a result of the previous translation fault.
+         * In this case, the second vcpu only needs to retry accessing the
+         * faulting address.
+         */
+        return true;
+
+    /*
+     * Lock hp2m to prevent the hostp2m from changing a mapping before it is
+     * added to the altp2m view.
+     */
+    p2m_read_lock(hp2m);
+
+    /* Check if entry is part of the host p2m view. */
+    mfn = p2m_lookup_attr(hp2m, gfn, &p2mt, &p2ma, &page_order);
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        goto out;
+
+    rc = modify_altp2m_entry(ap2m, gfn, mfn, p2mt, p2ma, page_order);
+    if ( rc )
+    {
+        gdprintk(XENLOG_ERR, "altp2m[%d] failed to set entry for %#"PRI_gfn" -> %#"PRI_mfn"\n",
+                 altp2m_vcpu(v).p2midx, gfn_x(gfn), mfn_x(mfn));
+        domain_crash(hp2m->domain);
+    }
+
+out:
+    p2m_read_unlock(hp2m);
+
+    return true;
+}
+
 static inline void altp2m_reset(struct p2m_domain *p2m)
 {
     p2m_write_lock(p2m);
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 0bf1653..a4c923c 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -48,6 +48,8 @@
 #include <asm/vgic.h>
 #include <asm/cpuerrata.h>
 
+#include <asm/altp2m.h>
+
 /* The base of the stack must always be double-word aligned, which means
  * that both the kernel half of struct cpu_user_regs (which is pushed in
  * entry.S) and struct cpu_info (which lives at the bottom of a Xen
@@ -2397,6 +2399,24 @@ static inline bool hpfar_is_valid(bool s1ptw, uint8_t fsc)
     return s1ptw || (fsc == FSC_FLT_TRANS && !check_workaround_834220());
 }
 
+static bool_t try_handle_altp2m(struct vcpu *v,
+                                paddr_t gpa,
+                                struct npfec npfec)
+{
+    bool_t rc = false;
+
+    if ( altp2m_active(v->domain) )
+        /*
+         * Copy the mapping of the faulting address into the currently
+         * active altp2m view. Return true on success or if the particular
+         * mapping has already been lazily copied to the currently active
+         * altp2m view by another vcpu. Return false otherwise.
+         */
+        rc = altp2m_lazy_copy(v, _gfn(paddr_to_pfn(gpa)), npfec);
+
+    return rc;
+}
+
 static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
                                       const union hsr hsr)
 {
@@ -2445,6 +2465,14 @@ static void do_trap_instr_abort_guest(struct cpu_user_regs *regs,
         break;
     case FSC_FLT_TRANS:
         /*
+         * If the altp2m handler succeeds, the guest retries the faulting
+         * access. Otherwise, we eventually inject an instruction abort
+         * exception.
+         */
+        if ( try_handle_altp2m(current, gpa, npfec) )
+            return;
+
+        /*
          * The PT walk may have failed because someone was playing
          * with the Stage-2 page table. Walk the Stage-2 PT to check
          * if the entry exists. If it's the case, return to the guest
@@ -2547,6 +2575,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
         }
 
         /*
+         * If the altp2m handler succeeds, the guest retries the faulting
+         * access. Otherwise, we eventually inject a data abort exception.
+         */
+        if ( try_handle_altp2m(current, info.gpa, npfec) )
+            return;
+
+        /*
          * The PT walk may have failed because someone was playing
          * with the Stage-2 page table. Walk the Stage-2 PT to check
          * if the entry exists. If it's the case, return to the guest
diff --git a/xen/include/asm-arm/altp2m.h b/xen/include/asm-arm/altp2m.h
index ef80829..8e40c45 100644
--- a/xen/include/asm-arm/altp2m.h
+++ b/xen/include/asm-arm/altp2m.h
@@ -84,6 +84,11 @@ int altp2m_set_mem_access(struct domain *d,
                           p2m_access_t a,
                           gfn_t gfn);
 
+/* Alternate p2m paging mechanism. */
+bool_t altp2m_lazy_copy(struct vcpu *v,
+                        gfn_t gfn,
+                        struct npfec npfec);
+
 /* Propagates changes made to hostp2m to affected altp2m views. */
 int altp2m_propagate_change(struct domain *d,
                             gfn_t sgfn,
-- 
2.9.0

