
sh_unshadow_for_p2m_change() vs p2m_set_entry()

  • To: Tim Deegan <tim@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 24 Sep 2021 13:31:44 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 24 Sep 2021 11:31:59 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>


I'm afraid you're still my best hope of getting insight into
issues like this one.

While doing IOMMU superpage work I was, just in the background,
considering to what extent the superpage re-coalescing to be used
there could also be re-used for P2M / EPT / NPT. That got me thinking
about shadow mode's use of p2m-pt.c: the tables are used purely in
software in that case, aren't they? In which case hardware support
for superpages shouldn't matter at all.

The only place where I could spot that P2M superpages would actually
make a difference to shadow code was sh_unshadow_for_p2m_change().
That one appears to have been dealing with 2M pages (more below)
already at the time of 0ca1669871f8a ("P2M: check whether hap mode
is enabled before using 2mb pages"), so I wonder what "potential
errors when hap is disabled" this commit's description might be
talking about. (Really, if it's software use of the tables only, in
principle even 512Gb superpages could be made use of there. But of
course sh_unshadow_for_p2m_change() wouldn't really like this, just
like it doesn't like 1Gb pages. So that's purely a theoretical
consideration anyway.)

As to sh_unshadow_for_p2m_change()'s readiness for at least 2Mb
pages: The 4k page handling there differs from the 2M one primarily
in the p2m type checks: "p2m_is_valid(...) || p2m_is_grant(...)"
vs "p2m_is_valid(...)" plus later "!p2m_is_ram(...)", the first
three acting on the type taken from the old PTE, while the latter
acts on the type in the new (split) PTEs. Shouldn't the exact same
checks be used in both cases (less the p2m_is_grant() check which
can't be true for superpages)? IOW isn't !p2m_is_ram() at least
superfluous (and perhaps further redundant with the subsequent
!mfn_eq(l1e_get_mfn(npte[i]), omfn))? Instead I'm missing an
entry-is-present check, without which l1e_get_mfn(npte[i]) looks
risky at best. Or is p2m_is_ram() considered a sufficient
replacement, making assumptions on the behavior of a lot of other
code?
The 2M logic also first checks _PAGE_PRESENT (and _PAGE_PSE), while
the 4k logic appears to infer that the old page was present from
the p2m type checks alone.
And isn't this 2M page handling code (because of the commit pointed
at above) dead right now anyway? If not, where would P2M superpages
come from?

Thanks much,

Jan
