[xen stable-4.19] x86/mm: skip super-page alignment checks for non-present entries
commit f2e41f075dc903fca83211955cb2b8221d7bb7f6
Author: Roger Pau Monné <roger.pau@xxxxxxxxxx>
AuthorDate: Mon Nov 25 12:02:53 2024 +0100
Commit: Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Mon Nov 25 12:02:53 2024 +0100
x86/mm: skip super-page alignment checks for non-present entries
INVALID_MFN is ~0; with all of its bits set to 1 it cannot satisfy the
super-page address alignment checks for L3 and L2 entries. Skip the alignment
checks if the new entry is a non-present one.
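For illustration, a minimal standalone sketch of the failing arithmetic
(PAGE_SHIFT, PAGETABLE_ORDER and the mask computation are modeled on the hunk
at the end of this mail; none of this is Xen code):

    #include <stdio.h>

    #define PAGE_SHIFT      12
    #define PAGETABLE_ORDER  9              /* 512 entries per page-table level */
    #define PFN_DOWN(va)    ((va) >> PAGE_SHIFT)
    #define INVALID_MFN     (~0UL)

    int main(void)
    {
        unsigned long virt = 2UL << 30;     /* 2GiB: L3 (1GiB) aligned */
        /* Alignment mask for an L3 slot (n = 3), as in IS_LnE_ALIGNED(). */
        unsigned long mask = (1UL << (PAGETABLE_ORDER * (3 - 1))) - 1;

        /* An mfn of 0 keeps the tuple aligned... */
        printf("%d\n", ((PFN_DOWN(virt) | 0UL) & mask) == 0);          /* 1 */
        /* ...while INVALID_MFN sets every mask bit, so the check always fails. */
        printf("%d\n", ((PFN_DOWN(virt) | INVALID_MFN) & mask) == 0);  /* 0 */
        return 0;
    }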
This fixes a regression introduced by 0b6b51a69f4d, where the switch from 0
to INVALID_MFN caused all super-pages to be shattered when attempting to
remove mappings by passing INVALID_MFN instead of 0.
Fixes: 0b6b51a69f4d ('xen/mm: Switch map_pages_to_xen to use MFN typesafe')
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
x86/mm: fix alignment check for non-present entries
While the alignment of the mfn is not relevant for non-present entries, the
alignment of the linear address is. Commit 5b52e1b0436f introduced a
regression by not checking the alignment of the linear address when the new
entry was a non-present one.
Fix by always checking the alignment of the linear address; non-present
entries must merely skip the alignment check of the physical address.
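Putting the two fixes together, a standalone sketch of how the call sites now
behave (constants and helpers are modeled on the hunks at the end of this
mail, not taken from Xen headers):

    #include <stdbool.h>
    #include <stdio.h>

    #define PAGE_SHIFT       12
    #define PAGETABLE_ORDER   9
    #define PFN_DOWN(va)     ((va) >> PAGE_SHIFT)
    #define INVALID_MFN      (~0UL)
    #define _PAGE_PRESENT    0x001U

    /* Model of IS_LnE_ALIGNED() for an L3 (1GiB) slot, i.e. n = 3. */
    static bool is_l3e_aligned(unsigned long virt, unsigned long mfn,
                               unsigned int flags)
    {
        unsigned long mask = (1UL << (PAGETABLE_ORDER * (3 - 1))) - 1;

        /* The fix: substitute the always-aligned 0 for non-present entries,
         * so only the linear-address bits can disqualify a super-page. */
        return ((PFN_DOWN(virt) | (flags & _PAGE_PRESENT ? mfn : 0UL)) & mask) == 0;
    }

    int main(void)
    {
        /* Tearing down a 1GiB mapping (non-present, INVALID_MFN) once again
         * qualifies for super-page handling instead of being shattered. */
        printf("%d\n", is_l3e_aligned(1UL << 30, INVALID_MFN, 0));      /* 1 */
        /* A linear address misaligned by 2MiB must still fail (2nd commit). */
        printf("%d\n", is_l3e_aligned((1UL << 30) + (2UL << 20),
                                      INVALID_MFN, 0));                 /* 0 */
        return 0;
    }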
Fixes: 5b52e1b0436f ('x86/mm: skip super-page alignment checks for non-present entries')
Reported-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Suggested-by: Jan Beulich <jbeulich@xxxxxxxx>
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
master commit: 5b52e1b0436f4adb784562f4d05ae67605ce8ce7
master date: 2024-11-14 16:12:35 +0100
master commit: b1ebb6461a027f07e4a844cae348fbd9cfabe984
master date: 2024-11-15 14:14:12 +0100
---
xen/arch/x86/mm.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 373a91e3d7..07631067ae 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5227,10 +5227,17 @@ int map_pages_to_xen(
         }                                          \
     } while (0)
 
-/* Check if a (virt, mfn) tuple is aligned for a given slot level. */
-#define IS_LnE_ALIGNED(v, m, n) \
-    IS_ALIGNED(PFN_DOWN(v) | mfn_x(m), \
-               (1UL << (PAGETABLE_ORDER * ((n) - 1))) - 1)
+/*
+ * Check if a (virt, mfn) tuple is aligned for a given slot level. m must not
+ * be INVALID_MFN, since alignment is only relevant for present entries.
+ */
+#define IS_LnE_ALIGNED(v, m, n) ({                              \
+    mfn_t m_ = m;                                               \
+                                                                \
+    ASSERT(!mfn_eq(m_, INVALID_MFN));                           \
+    IS_ALIGNED(PFN_DOWN(v) | mfn_x(m_),                         \
+               (1UL << (PAGETABLE_ORDER * ((n) - 1))) - 1);     \
+})
 
 #define IS_L2E_ALIGNED(v, m) IS_LnE_ALIGNED(v, m, 2)
 #define IS_L3E_ALIGNED(v, m) IS_LnE_ALIGNED(v, m, 3)
@@ -5251,7 +5258,8 @@ int map_pages_to_xen(
         L3T_LOCK(current_l3page);
         ol3e = *pl3e;
 
-        if ( cpu_has_page1gb && IS_L3E_ALIGNED(virt, mfn) &&
+        if ( cpu_has_page1gb &&
+             IS_L3E_ALIGNED(virt, flags & _PAGE_PRESENT ? mfn : _mfn(0)) &&
              nr_mfns >= (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT)) &&
              !(flags & (_PAGE_PAT | MAP_SMALL_PAGES)) )
         {
@@ -5371,7 +5379,7 @@ int map_pages_to_xen(
         if ( !pl2e )
             goto out;
 
-        if ( IS_L2E_ALIGNED(virt, mfn) &&
+        if ( IS_L2E_ALIGNED(virt, flags & _PAGE_PRESENT ? mfn : _mfn(0)) &&
              (nr_mfns >= (1u << PAGETABLE_ORDER)) &&
              !(flags & (_PAGE_PAT|MAP_SMALL_PAGES)) )
         {
--
generated by git-patchbot for /home/xen/git/xen.git#stable-4.19