
[Xen-changelog] [xen stable-4.8] x86/mm: add explicit preemption checks to L3 (un)validation

commit 40ad83f2d60a370c884da29d7f004096087dd041
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Tue Mar 5 15:42:59 2019 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Mar 5 15:42:59 2019 +0100

    x86/mm: add explicit preemption checks to L3 (un)validation
    
    When recursive page tables are used at the L3 level, unvalidation of a
    single L4 table may incur unvalidation of two levels of L3 tables, i.e.
    a maximum iteration count of 512^3 for unvalidating an L4 table. The
    preemption check in free_l2_table() as well as the one in
    _put_page_type() may never be reached, so explicit checking is needed in
    free_l3_table().
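    
    The shape of the check being added is, in outline (a minimal sketch
    mirroring the hunks below; declarations and the per-entry work are
    elided, so this is illustrative rather than the literal patch):
    
        for ( i = page->nr_validated_ptes; i < L3_PAGETABLE_ENTRIES; i++ )
        {
            /*
             * Only consider preempting once at least one entry has been
             * processed, so the operation always makes forward progress.
             */
            if ( i > page->nr_validated_ptes && hypercall_preempt_check() )
            {
                page->nr_validated_ptes = i; /* resume point */
                rc = -ERESTART;              /* caller re-invokes the op */
                break;
            }
            /* ... (un)validate entry i ... */
        }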
    
    When recursive page tables are used at the L4 level, the iteration count
    at L4 alone is capped at 512^2. As soon as a present L3 entry is hit
    which itself needs unvalidation (and hence requires another nested loop
    with 512 iterations), the preemption checks added here kick in, so no
    further preemption checking is needed at L4 (until we decide to permit
    5-level paging for PV guests).
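    
    In concrete numbers (plain arithmetic, not part of the patch):
    
        512^2 =     262,144 iterations at L4 alone (bounded, no check)
        512^3 = 134,217,728 worst-case iterations once L3 recursion is
                            involved (hence the explicit checks)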
    
    The validation side additions are done just for symmetry.
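    
    As for what the check consults: hypercall_preempt_check() boils down
    to "is softirq work or guest event delivery pending on this CPU?".
    A paraphrase, not the literal Xen definition:
    
        #define hypercall_preempt_check() (unlikely(    \
            softirq_pending(smp_processor_id()) ||      \
            local_events_need_delivery() ))
    
    so the (un)validation loops yield only when there is actually
    something else to run.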
    
    This is part of XSA-290.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: bac4567a67d5e8b916801ea5a04cf8b443dfb245
    master date: 2019-03-05 13:51:46 +0100
---
 xen/arch/x86/mm.c | 27 ++++++++++++++++++++++-----
 1 file changed, 22 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 91a162923a..5948f5eadf 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1740,6 +1740,13 @@ static int alloc_l3_table(struct page_info *page)
     for ( i = page->nr_validated_ptes; i < L3_PAGETABLE_ENTRIES;
           i++, partial = 0 )
     {
+        if ( i > page->nr_validated_ptes && hypercall_preempt_check() )
+        {
+            page->nr_validated_ptes = i;
+            rc = -ERESTART;
+            break;
+        }
+
         if ( is_pv_32bit_domain(d) && (i == 3) )
         {
             if ( !(l3e_get_flags(pl3e[i]) & _PAGE_PRESENT) ||
@@ -1977,18 +1984,28 @@ static int free_l3_table(struct page_info *page)
 
     pl3e = map_domain_page(_mfn(pfn));
 
-    do {
+    for ( ; ; )
+    {
         if ( is_guest_l3_slot(i) )
         {
             rc = put_page_from_l3e(pl3e[i], pfn, partial, 0);
             if ( rc < 0 )
                 break;
+
             partial = 0;
-            if ( rc > 0 )
-                continue;
-            unadjust_guest_l3e(pl3e[i], d);
+            if ( rc == 0 )
+                unadjust_guest_l3e(pl3e[i], d);
         }
-    } while ( i-- );
+
+        if ( !i-- )
+            break;
+
+        if ( hypercall_preempt_check() )
+        {
+            rc = -EINTR;
+            break;
+        }
+    }
 
     unmap_domain_page(pl3e);
 
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.8
