
[PATCH 2/2] x86/shadow: mark more of sh_page_fault() HVM-only


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 19 Jan 2023 14:19:33 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>
  • Delivery-date: Thu, 19 Jan 2023 13:19:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Neither p2m_mmio_dm nor the types p2m_is_readonly() checks for are
applicable to PV; specifically get_gfn() won't ever return such a type
for PV domains. Adjacent to those checks is yet another is_hvm_...()
one. With that block made conditional, another conditional block near
the end of the function can be widened.

Furthermore, the shadow_mode_refcounts() check immediately ahead of the
newly inserted #ifdef renders redundant two subsequent is_hvm_domain()
checks (the latter of which was already redundant with the former).

Also, guest_mode() checks are pointless when we already know we're
dealing with an HVM domain.

Finally, style-adjust a comment which would otherwise be fully visible
as patch context anyway.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I'm not convinced of the usefulness of the ASSERT() immediately after
the "mmio" label. Additionally I think the code there would be better
moved to the single place where we presently have "goto mmio", bringing
things more in line with the other handle_mmio_with_translation()
invocation site.

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2158,8 +2158,8 @@ static int cf_check sh_page_fault(
     gfn_t gfn = _gfn(0);
     mfn_t gmfn, sl1mfn = _mfn(0);
     shadow_l1e_t sl1e, *ptr_sl1e;
-    paddr_t gpa;
 #ifdef CONFIG_HVM
+    paddr_t gpa;
     struct sh_emulate_ctxt emul_ctxt;
     const struct x86_emulate_ops *emul_ops;
     int r;
@@ -2583,6 +2583,7 @@ static int cf_check sh_page_fault(
         goto emulate;
     }
 
+#ifdef CONFIG_HVM
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm )
     {
@@ -2614,13 +2615,14 @@ static int cf_check sh_page_fault(
         perfc_incr(shadow_fault_emulate_wp);
         goto emulate;
     }
+#endif
 
     perfc_incr(shadow_fault_fixed);
     d->arch.paging.log_dirty.fault_count++;
     sh_reset_early_unshadow(v);
 
     trace_shadow_fixup(gw.l1e, va);
- done:
+ done: __maybe_unused;
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("fixed\n");
     shadow_audit_tables(v);
@@ -2629,9 +2631,10 @@ static int cf_check sh_page_fault(
     return EXCRET_fault_fixed;
 
  emulate:
-    if ( !shadow_mode_refcounts(d) || !guest_mode(regs) )
+    if ( !shadow_mode_refcounts(d) )
         goto not_a_shadow_fault;
 
+#ifdef CONFIG_HVM
     /*
      * We do not emulate user writes. Instead we use them as a hint that the
      * page is no longer a page table. This behaviour differs from native, but
@@ -2653,17 +2656,11 @@ static int cf_check sh_page_fault(
      * caught by user-mode page-table check above.
      */
  emulate_readonly:
-    if ( !is_hvm_domain(d) )
-    {
-        ASSERT_UNREACHABLE();
-        goto not_a_shadow_fault;
-    }
-
-#ifdef CONFIG_HVM
-    /* Unshadow if we are writing to a toplevel pagetable that is
-     * flagged as a dying process, and that is not currently used. */
-    if ( sh_mfn_is_a_page_table(gmfn) && is_hvm_domain(d) &&
-         mfn_to_page(gmfn)->pagetable_dying )
+    /*
+     * Unshadow if we are writing to a toplevel pagetable that is
+     * flagged as a dying process, and that is not currently used.
+     */
+    if ( sh_mfn_is_a_page_table(gmfn) && mfn_to_page(gmfn)->pagetable_dying )
     {
         int used = 0;
         struct vcpu *tmp;
@@ -2867,13 +2864,9 @@ static int cf_check sh_page_fault(
  emulate_done:
     SHADOW_PRINTK("emulated\n");
     return EXCRET_fault_fixed;
-#endif /* CONFIG_HVM */
 
  mmio:
-    if ( !guest_mode(regs) )
-        goto not_a_shadow_fault;
-#ifdef CONFIG_HVM
-    ASSERT(is_hvm_vcpu(v));
+    ASSERT(is_hvm_domain(d));
     perfc_incr(shadow_fault_mmio);
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("mmio %#"PRIpaddr"\n", gpa);
@@ -2884,9 +2877,7 @@ static int cf_check sh_page_fault(
     trace_shadow_gen(TRC_SHADOW_MMIO, va);
     return (handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
             ? EXCRET_fault_fixed : 0);
-#else
-    BUG();
-#endif
+#endif /* CONFIG_HVM */
 
  not_a_shadow_fault:
     sh_audit_gw(v, &gw);
