[xen staging] x86/mm: widen locked region in xenmem_add_to_physmap_one()
commit f147422bf9476fb8161b43e35f5901571ed17c35
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Wed Aug 25 14:17:56 2021 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Wed Aug 25 14:17:56 2021 +0200

    x86/mm: widen locked region in xenmem_add_to_physmap_one()

    For pages which can be made part of the P2M by the guest, but which can
    also later be de-allocated (grant table v2 status pages being the
    present example), it is imperative that they be mapped at no more than a
    single GFN. We therefore need to make sure that of two parallel
    XENMAPSPACE_grant_table requests for the same status page one completes
    before the second checks at which other GFN the underlying MFN is
    presently mapped.

    Pull ahead the respective get_gfn() and push down the respective
    put_gfn(). This leverages that gfn_lock() really aliases p2m_lock(), but
    the function makes this assumption already anyway: In the
    XENMAPSPACE_gmfn case lock nesting constraints for both involved GFNs
    would otherwise need to be enforced to avoid ABBA deadlocks.

    This is CVE-2021-28697 / XSA-379.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Julien Grall <jgrall@xxxxxxxxxx>
---
 xen/arch/x86/mm/p2m.c | 26 ++++++++++++++++++--------
 1 file changed, 18 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 09cbd8a07d..1d17499543 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2756,17 +2756,29 @@ int xenmem_add_to_physmap_one(
         goto put_both;
     }
 
+    /*
+     * Note that we're (ab)using GFN locking (to really be locking of the
+     * entire P2M) here in (at least) two ways: Finer grained locking would
+     * expose lock order violations in the XENMAPSPACE_gmfn case (due to the
+     * earlier get_gfn_unshare() above). Plus at the very least for the grant
+     * table v2 status page case we need to guarantee that the same page can
+     * only appear at a single GFN. While this is a property we want in
+     * general, for pages which can subsequently be freed this is imperative:
+     * Upon freeing we wouldn't be able to find other mappings in the P2M
+     * (unless we did a brute force search).
+     */
+    prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
+
     /* XENMAPSPACE_gmfn: Check if the MFN is associated with another GFN. */
     old_gpfn = get_gpfn_from_mfn(mfn_x(mfn));
     ASSERT(!SHARED_M2P(old_gpfn));
     if ( space == XENMAPSPACE_gmfn && old_gpfn != gfn )
     {
         rc = -EXDEV;
-        goto put_both;
+        goto put_all;
     }
 
     /* Remove previously mapped page if it was present. */
-    prev_mfn = get_gfn(d, gfn_x(gpfn), &p2mt);
     if ( p2mt == p2m_mmio_direct )
         rc = -EPERM;
     else if ( mfn_valid(prev_mfn) )
@@ -2778,20 +2790,18 @@ int xenmem_add_to_physmap_one(
         /* Normal domain memory is freed, to avoid leaking memory. */
         rc = guest_remove_page(d, gfn_x(gpfn));
     }
-    /* In the XENMAPSPACE_gmfn case we still hold a ref on the old page. */
-    put_gfn(d, gfn_x(gpfn));
-
-    if ( rc )
-        goto put_both;
 
     /* Unmap from old location, if any. */
-    if ( old_gpfn != INVALID_M2P_ENTRY )
+    if ( !rc && old_gpfn != INVALID_M2P_ENTRY )
         rc = guest_physmap_remove_page(d, _gfn(old_gpfn), mfn,
                                        PAGE_ORDER_4K);
 
     /* Map at new location. */
     if ( !rc )
         rc = guest_physmap_add_page(d, gpfn, mfn, PAGE_ORDER_4K);
 
+ put_all:
+    put_gfn(d, gfn_x(gpfn));
+
  put_both:
     /*
      * In the XENMAPSPACE_gmfn case, we took a ref of the gfn at the top.
--
generated by git-patchbot for /home/xen/git/xen.git#staging