[xen staging-4.14] x86/shadow: tolerate failure of sh_set_toplevel_shadow()



commit 0bab3abf73783da66af8cf7cf7aabb7d86caa035
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Tue Oct 11 15:35:43 2022 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Oct 11 15:35:43 2022 +0200

    x86/shadow: tolerate failure of sh_set_toplevel_shadow()
    
    In a subsequent change, sh_set_toplevel_shadow() will be adjusted to
    install a blank entry in case prealloc fails. There are, in fact,
    pre-existing error paths which would put in place a blank entry. The
    4- and 2-level code in sh_update_cr3(), however, assumes the top
    level entry to be valid.
    
    Hence bail from the function in the unlikely event that it's not. Note
    that 3-level logic works differently: In particular a guest is free to
    supply a PDPTR pointing at 4 non-present (or otherwise deemed invalid)
    entries. The guest will crash, but we already cope with that.
    
    Really, mfn_valid() is likely the wrong check to use in
    sh_set_toplevel_shadow(); it should instead be
    !mfn_eq(gmfn, INVALID_MFN). Avoid making such a change in the context
    of a security fix, but add a corresponding assertion.
    
    This is part of CVE-2022-33746 / XSA-410.
    
    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Acked-by: Tim Deegan <tim@xxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    master commit: eac000978c1feb5a9ee3236ab0c0da9a477e5336
    master date: 2022-10-11 14:22:24 +0200
---
 xen/arch/x86/mm/shadow/multi.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 99e410d999..c129b8103e 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3854,6 +3854,7 @@ sh_set_toplevel_shadow(struct vcpu *v,
     /* Now figure out the new contents: is this a valid guest MFN? */
     if ( !mfn_valid(gmfn) )
     {
+        ASSERT(mfn_eq(gmfn, INVALID_MFN));
         new_entry = pagetable_null();
         goto install_new_entry;
     }
@@ -4007,6 +4008,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 2, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l2_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
 #elif GUEST_PAGING_LEVELS == 3
     /* PAE guests have four shadow_table entries, based on the
      * current values of the guest's four l3es. */
@@ -4052,6 +4058,11 @@ sh_update_cr3(struct vcpu *v, int do_locking, bool noflush)
     if ( sh_remove_write_access(d, gmfn, 4, 0) != 0 )
         guest_flush_tlb_mask(d, d->dirty_cpumask);
     sh_set_toplevel_shadow(v, 0, gmfn, SH_type_l4_shadow);
+    if ( unlikely(pagetable_is_null(v->arch.shadow_table[0])) )
+    {
+        ASSERT(d->is_dying || d->is_shutting_down);
+        return;
+    }
     if ( !shadow_mode_external(d) && !is_pv_32bit_domain(d) )
     {
         mfn_t smfn = pagetable_get_mfn(v->arch.shadow_table[0]);
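
For illustration, here is a minimal stand-alone C sketch of the pattern the
hunks above introduce: an installer which may leave a blank top-level entry
when preallocation fails, and a caller which checks for that and bails out
early instead of assuming the entry is valid. This is not Xen code; all
names used (toplevel_t, install_toplevel, update_cr3_levels, domain_dying)
are hypothetical stand-ins for pagetable_t, sh_set_toplevel_shadow(),
sh_update_cr3(), and d->is_dying respectively.

#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for pagetable_t: a blank entry is represented by a NULL page. */
typedef struct { void *page; } toplevel_t;

/* Stand-in for d->is_dying / d->is_shutting_down. */
static bool domain_dying;

/* Stand-in for pagetable_is_null(). */
static bool toplevel_is_null(toplevel_t t) { return t.page == NULL; }

/*
 * Stand-in for sh_set_toplevel_shadow(): when preallocation fails it
 * installs a blank entry rather than reporting an error to the caller.
 */
static toplevel_t install_toplevel(bool prealloc_ok)
{
    toplevel_t t = { .page = NULL };

    if ( prealloc_ok )
        t.page = malloc(4096);

    return t;
}

/*
 * Stand-in for the 2-/4-level paths of sh_update_cr3(): tolerate a
 * blank top-level entry by bailing early instead of dereferencing it.
 */
static void update_cr3_levels(bool prealloc_ok)
{
    toplevel_t shadow = install_toplevel(prealloc_ok);

    if ( toplevel_is_null(shadow) )
    {
        /* Only expected while the domain is already being torn down. */
        assert(domain_dying);
        printf("blank top-level entry - bailing early\n");
        return;
    }

    printf("top-level entry installed - continuing\n");
    free(shadow.page);
}

int main(void)
{
    update_cr3_levels(true);    /* normal path */
    domain_dying = true;
    update_cr3_levels(false);   /* prealloc failure, now tolerated */

    return 0;
}

The point mirrored here is that the failure leg is only expected while the
domain is being torn down, hence an assertion rather than full error
handling on that path.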
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.14