
[PATCH 07/16] x86/shadow: call sh_update_cr3() directly from sh_page_fault()


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 22 Mar 2023 10:33:46 +0100
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Wed, 22 Mar 2023 09:34:05 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

There's no need for an indirect call here, as the mode is invariant
throughout the entire paging-locked region. All it takes to avoid it is
to have a forward declaration of sh_update_cr3() in place.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I find this and the respective Win7 related comment suspicious: If we
really need to "fix up" L3 entries "on demand", wouldn't we be better
off retrying the shadow_get_and_create_l1e() rather than exiting? The
spurious page fault that the guest observes cannot, after all, be known
to be non-fatal inside the guest. That's purely an OS policy.

Furthermore the sh_update_cr3() will also invalidate L3 entries which
were loaded successfully before, but invalidated by the guest
afterwards. I strongly suspect that the described hardware behavior is
_only_ to load previously not-present entries from the PDPT, but not
purge ones already marked present. IOW I think sh_update_cr3() would
need calling in an "incremental" mode here. (The alternative of doing
this in shadow_get_and_create_l3e() instead would likely be more
cumbersome.)

In any event emitting a TRC_SHADOW_DOMF_DYING trace record in this case
looks wrong.

Beyond the "on demand" L3 entry creation I also can't see what guest
actions could lead to the ASSERT() being inapplicable in the PAE case.
The 3-level code in shadow_get_and_create_l2e() doesn't consult guest
PDPTEs, and all other logic is similar to that for other modes.

(See 89329d832aed ["x86 shadow: Update cr3 in PAE mode when guest walk
succeed but shadow walk fails"].)

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -91,6 +91,8 @@ const char *const fetch_type_names[] = {
 # define for_each_shadow_table(v, i) for ( (i) = 0; (i) < 1; ++(i) )
 #endif
 
+static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush);
+
 /* Helper to perform a local TLB flush. */
 static void sh_flush_local(const struct domain *d)
 {
@@ -2487,7 +2489,7 @@ static int cf_check sh_page_fault(
          * In any case, in the PAE case, the ASSERT is not true; it can
          * happen because of actions the guest is taking. */
 #if GUEST_PAGING_LEVELS == 3
-        v->arch.paging.mode->update_cr3(v, 0, false);
+        sh_update_cr3(v, 0, false);
 #else
         ASSERT(d->is_shutting_down);
 #endif
