
[PATCH v2 04/13] x86/shadow: call sh_update_cr3() directly from sh_page_fault()


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 30 Mar 2023 13:27:17 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>
  • Delivery-date: Thu, 30 Mar 2023 11:27:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

There's no need for an indirect call here, as the mode is invariant
throughout the entire paging-locked region. All it takes to avoid it is
to have a forward declaration of sh_update_cr3() in place.

Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I find this and the respective Win7 related comment suspicious: If we
really need to "fix up" L3 entries "on demand", wouldn't we be better
off retrying shadow_get_and_create_l1e() rather than exiting? The
spurious page fault that the guest observes cannot, after all, be known
to be non-fatal inside the guest. That's purely an OS policy.

Furthermore, sh_update_cr3() will also invalidate L3 entries which were
loaded successfully before, but have been invalidated by the guest
afterwards. I strongly suspect that the described hardware behavior is
_only_ to load previously not-present entries from the PDPT, but not
purge ones already marked present. IOW I think sh_update_cr3() would
need calling in an "incremental" mode here. (The alternative of doing
this in shadow_get_and_create_l3e() instead would likely be more
cumbersome.)

Beyond the "on demand" L3 entry creation I also can't see what guest
actions could lead to the ASSERT() being inapplicable in the PAE case.
The 3-level code in shadow_get_and_create_l2e() doesn't consult guest
PDPTEs, and all other logic is similar to that for other modes.

(See 89329d832aed ["x86 shadow: Update cr3 in PAE mode when guest walk
succeed but shadow walk fails"].)

--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -91,6 +91,8 @@ const char *const fetch_type_names[] = {
 # define for_each_shadow_table(v, i) for ( (i) = 0; (i) < 1; ++(i) )
 #endif
 
+static void cf_check sh_update_cr3(struct vcpu *v, int do_locking, bool noflush);
+
 /* Helper to perform a local TLB flush. */
 static void sh_flush_local(const struct domain *d)
 {
@@ -2487,7 +2489,7 @@ static int cf_check sh_page_fault(
          * In any case, in the PAE case, the ASSERT is not true; it can
          * happen because of actions the guest is taking. */
 #if GUEST_PAGING_LEVELS == 3
-        v->arch.paging.mode->update_cr3(v, 0, false);
+        sh_update_cr3(v, 0, false);
 #else
         ASSERT(d->is_shutting_down);
 #endif




 

