
Re: [Xen-devel] [PATCH v2 7/9] x86/shadow: Use the pagewalk reserved bits helpers



On 16/03/17 16:31, Andrew Cooper wrote:
> The shadow logic should not create a valid/present shadow of a guest PTE which
> contains reserved bits from the guest's point of view.  It is not guaranteed
> that the hardware pagewalk will come to the same conclusion and raise a
> pagefault.
>
> Shadows created on demand from the pagefault handler are fine because the
> pagewalk over the guest tables will have injected the fault into the guest
> rather than creating a shadow.
>
> However, shadows created by sh_resync_l1() and sh_prefetch() haven't undergone
> a pagewalk and need to account for reserved bits before creating the shadow.
>
> In practice, this means a 3-level guest could previously cause PTEs with bits
> 63:52 set to be shadowed (and discarded).  This PTE should cause #PF[RSVD]
> when encountered by hardware, but the installed shadow is valid and hardware
> doesn't fault.
>
> Reuse the pagewalk reserved bits helpers, and assert in
> l?e_propagate_from_guest() that no attempt is made to create a shadow with
> reserved bits set.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
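
For anyone following along, here is a hand-rolled, stand-alone sketch of the
3-level case described above (not Xen's actual pagewalk helpers; the mask and
helper names are made up for illustration, and it assumes NX is unavailable to
the guest so that bits 63:52 of a present PAE L1E are all reserved):

/* Illustration only: a present PAE L1E with any of bits 63:52 set should
 * end in #PF[RSVD], never in a valid shadow. */
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PRESENT        0x1ULL
#define PAE_L1E_RSVD_MASK    (0xfffULL << 52)      /* bits 63:52 */

static int pae_l1e_rsvd_bits_set(uint64_t l1e)
{
    return (l1e & _PAGE_PRESENT) && (l1e & PAE_L1E_RSVD_MASK);
}

int main(void)
{
    uint64_t ok  = 0x0000000012345007ULL;          /* present, clean */
    uint64_t bad = 0x0010000012345007ULL;          /* present, bit 52 set */

    printf("ok:  %s\n", pae_l1e_rsvd_bits_set(ok)  ? "#PF[RSVD]" : "shadow");
    printf("bad: %s\n", pae_l1e_rsvd_bits_set(bad) ? "#PF[RSVD]" : "shadow");
    return 0;
}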

The Travis build running clang points out that p2mt might now be
uninitialised in the l1 cases.  In practice it doesn't matter, because
INVALID_MFN will cause _sh_propagate() to head in an empty() direction,
but I have folded the following delta in.
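
To illustrate why the warning fires (a stand-alone sketch, not Xen code; the
lookup() and propagate() helpers below are stand-ins for
get_gfn_query_unlocked() and _sh_propagate()): clang's
-Wsometimes-uninitialized triggers because p2mt is only assigned on the
present path but is read unconditionally afterwards, and initialising it to
p2m_invalid makes both paths well defined even though the INVALID_MFN path
never looks at it.

#include <stdio.h>

typedef enum { p2m_invalid = 0, p2m_ram_rw } p2m_type_t;
typedef struct { unsigned long m; } mfn_t;
#define INVALID_MFN ((mfn_t){ ~0UL })

static mfn_t lookup(unsigned long gfn, p2m_type_t *t)
{
    *t = p2m_ram_rw;                /* stand-in for get_gfn_query_unlocked() */
    return (mfn_t){ gfn };
}

static void propagate(mfn_t gmfn, p2m_type_t p2mt)
{
    /* Stand-in for _sh_propagate(): an invalid mfn yields an empty shadow,
     * so p2mt is never actually consulted on that path. */
    if ( gmfn.m == ~0UL )
        printf("empty shadow (p2mt ignored)\n");
    else
        printf("shadow for mfn %lu, p2mt %d\n", gmfn.m, p2mt);
}

int main(void)
{
    int present = 0;
    mfn_t gmfn = INVALID_MFN;
    p2m_type_t p2mt = p2m_invalid;  /* without this initialiser, clang warns */

    if ( present )
        gmfn = lookup(42, &p2mt);

    propagate(gmfn, p2mt);          /* p2mt read here on both paths */
    return 0;
}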

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 56114c7..627284d 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2309,7 +2309,7 @@ static int validate_gl1e(struct vcpu *v, void *new_ge, mfn_t sl1mfn, void *se)
     shadow_l1e_t *sl1p = se;
     gfn_t gfn;
     mfn_t gmfn = INVALID_MFN;
-    p2m_type_t p2mt;
+    p2m_type_t p2mt = p2m_invalid;
     int result = 0;
 #if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
     mfn_t gl1mfn;
@@ -2379,7 +2379,7 @@ void sh_resync_l1(struct vcpu *v, mfn_t gl1mfn, mfn_t snpmfn)
         {
             gfn_t gfn;
             mfn_t gmfn = INVALID_MFN;
-            p2m_type_t p2mt;
+            p2m_type_t p2mt = p2m_invalid;
             shadow_l1e_t nsl1e;
 
             if ( (guest_l1e_get_flags(gl1e) & _PAGE_PRESENT) &&
@@ -2721,7 +2721,10 @@ static void sh_prefetch(struct vcpu *v, walk_t *gw,
             gmfn = get_gfn_query_unlocked(d, gfn_x(gfn), &p2mt);
         }
         else
+        {
             gmfn = INVALID_MFN;
+            p2mt = p2m_invalid;
+        }
 
         /* Propagate the entry.  */
         l1e_propagate_from_guest(v, gl1e, gmfn, &sl1e, ft_prefetch, p2mt);

~Andrew
